Mar 08 03:08:58.256559 master-0 systemd[1]: Starting Kubernetes Kubelet...
Mar 08 03:08:58.873452 master-0 kubenswrapper[3991]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 08 03:08:58.873452 master-0 kubenswrapper[3991]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 08 03:08:58.873452 master-0 kubenswrapper[3991]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 08 03:08:58.874722 master-0 kubenswrapper[3991]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 08 03:08:58.874722 master-0 kubenswrapper[3991]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 08 03:08:58.874722 master-0 kubenswrapper[3991]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 08 03:08:58.877556 master-0 kubenswrapper[3991]: I0308 03:08:58.877401 3991 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 08 03:08:58.882938 master-0 kubenswrapper[3991]: W0308 03:08:58.882874 3991 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 08 03:08:58.882938 master-0 kubenswrapper[3991]: W0308 03:08:58.882929 3991 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 08 03:08:58.882938 master-0 kubenswrapper[3991]: W0308 03:08:58.882940 3991 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 08 03:08:58.883115 master-0 kubenswrapper[3991]: W0308 03:08:58.882951 3991 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 08 03:08:58.883115 master-0 kubenswrapper[3991]: W0308 03:08:58.882988 3991 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 08 03:08:58.883115 master-0 kubenswrapper[3991]: W0308 03:08:58.883016 3991 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 08 03:08:58.883115 master-0 kubenswrapper[3991]: W0308 03:08:58.883025 3991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 08 03:08:58.883115 master-0 kubenswrapper[3991]: W0308 03:08:58.883033 3991 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 08 03:08:58.883115 master-0 kubenswrapper[3991]: W0308 03:08:58.883040 3991 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 08 03:08:58.883115 master-0 kubenswrapper[3991]: W0308 03:08:58.883051 3991 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 08 03:08:58.883115 master-0 kubenswrapper[3991]: W0308 03:08:58.883061 3991 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 08 03:08:58.883115 master-0 kubenswrapper[3991]: W0308 03:08:58.883070 3991 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 08 03:08:58.883115 master-0 kubenswrapper[3991]: W0308 03:08:58.883078 3991 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 08 03:08:58.883115 master-0 kubenswrapper[3991]: W0308 03:08:58.883087 3991 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 08 03:08:58.883115 master-0 kubenswrapper[3991]: W0308 03:08:58.883096 3991 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 08 03:08:58.883115 master-0 kubenswrapper[3991]: W0308 03:08:58.883105 3991 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 08 03:08:58.883115 master-0 kubenswrapper[3991]: W0308 03:08:58.883113 3991 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 08 03:08:58.883115 master-0 kubenswrapper[3991]: W0308 03:08:58.883122 3991 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 08 03:08:58.883115 master-0 kubenswrapper[3991]: W0308 03:08:58.883130 3991 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 08 03:08:58.883765 master-0 kubenswrapper[3991]: W0308 03:08:58.883139 3991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 08 03:08:58.883765 master-0 kubenswrapper[3991]: W0308 03:08:58.883149 3991 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 08 03:08:58.883765 master-0 kubenswrapper[3991]: W0308 03:08:58.883157 3991 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 08 03:08:58.883765 master-0 kubenswrapper[3991]: W0308 03:08:58.883165 3991 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 08 03:08:58.883765 master-0 kubenswrapper[3991]: W0308 03:08:58.883173 3991 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 08 03:08:58.883765 master-0 kubenswrapper[3991]: W0308 03:08:58.883181 3991 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 08 03:08:58.883765 master-0 kubenswrapper[3991]: W0308 03:08:58.883189 3991 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 08 03:08:58.883765 master-0 kubenswrapper[3991]: W0308 03:08:58.883197 3991 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 08 03:08:58.883765 master-0 kubenswrapper[3991]: W0308 03:08:58.883204 3991 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 08 03:08:58.883765 master-0 kubenswrapper[3991]: W0308 03:08:58.883212 3991 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 08 03:08:58.883765 master-0 kubenswrapper[3991]: W0308 03:08:58.883220 3991 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 08 03:08:58.883765 master-0 kubenswrapper[3991]: W0308 03:08:58.883227 3991 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 08 03:08:58.883765 master-0 kubenswrapper[3991]: W0308 03:08:58.883235 3991 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 08 03:08:58.883765 master-0 kubenswrapper[3991]: W0308 03:08:58.883243 3991 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 08 03:08:58.883765 master-0 kubenswrapper[3991]: W0308 03:08:58.883250 3991 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 08 03:08:58.883765 master-0 kubenswrapper[3991]: W0308 03:08:58.883258 3991 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 08 03:08:58.883765 master-0 kubenswrapper[3991]: W0308 03:08:58.883266 3991 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 08 03:08:58.883765 master-0 kubenswrapper[3991]: W0308 03:08:58.883274 3991 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 08 03:08:58.883765 master-0 kubenswrapper[3991]: W0308 03:08:58.883281 3991 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 08 03:08:58.883765 master-0 kubenswrapper[3991]: W0308 03:08:58.883289 3991 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 08 03:08:58.884644 master-0 kubenswrapper[3991]: W0308 03:08:58.883297 3991 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 08 03:08:58.884644 master-0 kubenswrapper[3991]: W0308 03:08:58.883316 3991 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 08 03:08:58.884644 master-0 kubenswrapper[3991]: W0308 03:08:58.883324 3991 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 08 03:08:58.884644 master-0 kubenswrapper[3991]: W0308 03:08:58.883331 3991 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 08 03:08:58.884644 master-0 kubenswrapper[3991]: W0308 03:08:58.883339 3991 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 08 03:08:58.884644 master-0 kubenswrapper[3991]: W0308 03:08:58.883347 3991 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 08 03:08:58.884644 master-0 kubenswrapper[3991]: W0308 03:08:58.883355 3991 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 08 03:08:58.884644 master-0 kubenswrapper[3991]: W0308 03:08:58.883362 3991 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 08 03:08:58.884644 master-0 kubenswrapper[3991]: W0308 03:08:58.883372 3991 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 08 03:08:58.884644 master-0 kubenswrapper[3991]: W0308 03:08:58.883381 3991 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 08 03:08:58.884644 master-0 kubenswrapper[3991]: W0308 03:08:58.883391 3991 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 08 03:08:58.884644 master-0 kubenswrapper[3991]: W0308 03:08:58.883399 3991 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 08 03:08:58.884644 master-0 kubenswrapper[3991]: W0308 03:08:58.883407 3991 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 08 03:08:58.884644 master-0 kubenswrapper[3991]: W0308 03:08:58.883416 3991 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 08 03:08:58.884644 master-0 kubenswrapper[3991]: W0308 03:08:58.883423 3991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 08 03:08:58.884644 master-0 kubenswrapper[3991]: W0308 03:08:58.883431 3991 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 08 03:08:58.884644 master-0 kubenswrapper[3991]: W0308 03:08:58.883439 3991 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 08 03:08:58.884644 master-0 kubenswrapper[3991]: W0308 03:08:58.883447 3991 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 08 03:08:58.884644 master-0 kubenswrapper[3991]: W0308 03:08:58.883454 3991 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 08 03:08:58.884644 master-0 kubenswrapper[3991]: W0308 03:08:58.883462 3991 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 08 03:08:58.885647 master-0 kubenswrapper[3991]: W0308 03:08:58.883469 3991 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 08 03:08:58.885647 master-0 kubenswrapper[3991]: W0308 03:08:58.883477 3991 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 08 03:08:58.885647 master-0 kubenswrapper[3991]: W0308 03:08:58.883484 3991 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 08 03:08:58.885647 master-0 kubenswrapper[3991]: W0308 03:08:58.883492 3991 feature_gate.go:330] unrecognized feature gate: Example
Mar 08 03:08:58.885647 master-0 kubenswrapper[3991]: W0308 03:08:58.883499 3991 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 08 03:08:58.885647 master-0 kubenswrapper[3991]: W0308 03:08:58.883507 3991 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 08 03:08:58.885647 master-0 kubenswrapper[3991]: W0308 03:08:58.883515 3991 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 08 03:08:58.885647 master-0 kubenswrapper[3991]: W0308 03:08:58.883523 3991 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 08 03:08:58.885647 master-0 kubenswrapper[3991]: W0308 03:08:58.883530 3991 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 08 03:08:58.885647 master-0 kubenswrapper[3991]: W0308 03:08:58.883542 3991 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 08 03:08:58.885647 master-0 kubenswrapper[3991]: W0308 03:08:58.883552 3991 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 08 03:08:58.885647 master-0 kubenswrapper[3991]: W0308 03:08:58.883562 3991 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 08 03:08:58.885647 master-0 kubenswrapper[3991]: W0308 03:08:58.883571 3991 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 08 03:08:58.885647 master-0 kubenswrapper[3991]: I0308 03:08:58.883750 3991 flags.go:64] FLAG: --address="0.0.0.0"
Mar 08 03:08:58.885647 master-0 kubenswrapper[3991]: I0308 03:08:58.883766 3991 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 08 03:08:58.885647 master-0 kubenswrapper[3991]: I0308 03:08:58.883787 3991 flags.go:64] FLAG: --anonymous-auth="true"
Mar 08 03:08:58.885647 master-0 kubenswrapper[3991]: I0308 03:08:58.883798 3991 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 08 03:08:58.885647 master-0 kubenswrapper[3991]: I0308 03:08:58.883851 3991 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 08 03:08:58.885647 master-0 kubenswrapper[3991]: I0308 03:08:58.883861 3991 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 08 03:08:58.885647 master-0 kubenswrapper[3991]: I0308 03:08:58.883873 3991 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 08 03:08:58.885647 master-0 kubenswrapper[3991]: I0308 03:08:58.883883 3991 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.883892 3991 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.883926 3991 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.883937 3991 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.883946 3991 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.883955 3991 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.883964 3991 flags.go:64] FLAG: --cgroup-root=""
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.883973 3991 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.883983 3991 flags.go:64] FLAG: --client-ca-file=""
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.883991 3991 flags.go:64] FLAG: --cloud-config=""
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.884000 3991 flags.go:64] FLAG: --cloud-provider=""
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.884008 3991 flags.go:64] FLAG: --cluster-dns="[]"
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.884024 3991 flags.go:64] FLAG: --cluster-domain=""
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.884033 3991 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.884042 3991 flags.go:64] FLAG: --config-dir=""
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.884050 3991 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.884060 3991 flags.go:64] FLAG: --container-log-max-files="5"
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.884071 3991 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.884080 3991 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.884089 3991 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.884099 3991 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.884107 3991 flags.go:64] FLAG: --contention-profiling="false"
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.884116 3991 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.884124 3991 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.884134 3991 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 08 03:08:58.886649 master-0 kubenswrapper[3991]: I0308 03:08:58.884143 3991 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884154 3991 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884163 3991 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884172 3991 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884181 3991 flags.go:64] FLAG: --enable-load-reader="false"
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884190 3991 flags.go:64] FLAG: --enable-server="true"
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884199 3991 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884217 3991 flags.go:64] FLAG: --event-burst="100"
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884242 3991 flags.go:64] FLAG: --event-qps="50"
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884251 3991 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884260 3991 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884269 3991 flags.go:64] FLAG: --eviction-hard=""
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884281 3991 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884290 3991 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884299 3991 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884308 3991 flags.go:64] FLAG: --eviction-soft=""
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884317 3991 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884325 3991 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884334 3991 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884343 3991 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884352 3991 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884361 3991 flags.go:64] FLAG: --fail-swap-on="true"
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884370 3991 flags.go:64] FLAG: --feature-gates=""
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884380 3991 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884389 3991 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 08 03:08:58.887856 master-0 kubenswrapper[3991]: I0308 03:08:58.884400 3991 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884409 3991 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884418 3991 flags.go:64] FLAG: --healthz-port="10248"
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884427 3991 flags.go:64] FLAG: --help="false"
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884436 3991 flags.go:64] FLAG: --hostname-override=""
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884445 3991 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884454 3991 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884463 3991 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884471 3991 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884480 3991 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884489 3991 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884497 3991 flags.go:64] FLAG: --image-service-endpoint=""
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884506 3991 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884515 3991 flags.go:64] FLAG: --kube-api-burst="100"
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884524 3991 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884534 3991 flags.go:64] FLAG: --kube-api-qps="50"
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884542 3991 flags.go:64] FLAG: --kube-reserved=""
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884551 3991 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884560 3991 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884580 3991 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884589 3991 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884599 3991 flags.go:64] FLAG: --lock-file=""
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884608 3991 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884617 3991 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884626 3991 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884639 3991 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 08 03:08:58.888983 master-0 kubenswrapper[3991]: I0308 03:08:58.884648 3991 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 08 03:08:58.890212 master-0 kubenswrapper[3991]: I0308 03:08:58.884657 3991 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 08 03:08:58.890212 master-0 kubenswrapper[3991]: I0308 03:08:58.884665 3991 flags.go:64] FLAG: --logging-format="text"
Mar 08 03:08:58.890212 master-0 kubenswrapper[3991]: I0308 03:08:58.884674 3991 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 08 03:08:58.890212 master-0 kubenswrapper[3991]: I0308 03:08:58.884684 3991 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 08 03:08:58.890212 master-0 kubenswrapper[3991]: I0308 03:08:58.884693 3991 flags.go:64] FLAG: --manifest-url=""
Mar 08 03:08:58.890212 master-0 kubenswrapper[3991]: I0308 03:08:58.884701 3991 flags.go:64] FLAG: --manifest-url-header=""
Mar 08 03:08:58.890212 master-0 kubenswrapper[3991]: I0308 03:08:58.884713 3991 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 08 03:08:58.890212 master-0 kubenswrapper[3991]: I0308 03:08:58.884721 3991 flags.go:64] FLAG: --max-open-files="1000000"
Mar 08 03:08:58.890212 master-0 kubenswrapper[3991]: I0308 03:08:58.884732 3991 flags.go:64] FLAG: --max-pods="110"
Mar 08 03:08:58.890212 master-0 kubenswrapper[3991]: I0308 03:08:58.884741 3991 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 08 03:08:58.890212 master-0 kubenswrapper[3991]: I0308 03:08:58.884750 3991 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 08 03:08:58.890212 master-0 kubenswrapper[3991]: I0308 03:08:58.884759 3991 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 08 03:08:58.890212 master-0 kubenswrapper[3991]: I0308 03:08:58.884768 3991 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 08 03:08:58.890212 master-0 kubenswrapper[3991]: I0308 03:08:58.884777 3991 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 08 03:08:58.890212 master-0 kubenswrapper[3991]: I0308 03:08:58.884786 3991 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 08 03:08:58.890212 master-0 kubenswrapper[3991]: I0308 03:08:58.884795 3991 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 08 03:08:58.890212 master-0 kubenswrapper[3991]: I0308 03:08:58.884825 3991 flags.go:64] FLAG: --node-status-max-images="50"
Mar 08 03:08:58.890212 master-0 kubenswrapper[3991]: I0308 03:08:58.884835 3991 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 08 03:08:58.890212 master-0 kubenswrapper[3991]: I0308 03:08:58.884844 3991 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 08 03:08:58.890212 master-0 kubenswrapper[3991]: I0308 03:08:58.884853 3991 flags.go:64] FLAG: --pod-cidr=""
Mar 08 03:08:58.890212 master-0 kubenswrapper[3991]: I0308 03:08:58.884862 3991 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3"
Mar 08 03:08:58.890212 master-0 kubenswrapper[3991]: I0308 03:08:58.884875 3991 flags.go:64] FLAG: --pod-manifest-path=""
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.884884 3991 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.884900 3991 flags.go:64] FLAG: --pods-per-core="0"
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.884933 3991 flags.go:64] FLAG: --port="10250"
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.884942 3991 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.884952 3991 flags.go:64] FLAG: --provider-id=""
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.884960 3991 flags.go:64] FLAG: --qos-reserved=""
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.884980 3991 flags.go:64] FLAG: --read-only-port="10255"
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.884990 3991 flags.go:64] FLAG: --register-node="true"
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.884999 3991 flags.go:64] FLAG: --register-schedulable="true"
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.885008 3991 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.885022 3991 flags.go:64] FLAG: --registry-burst="10"
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.885031 3991 flags.go:64] FLAG: --registry-qps="5"
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.885040 3991 flags.go:64] FLAG: --reserved-cpus=""
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.885049 3991 flags.go:64] FLAG: --reserved-memory=""
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.885060 3991 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.885069 3991 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.885078 3991 flags.go:64] FLAG: --rotate-certificates="false"
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.885086 3991 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.885095 3991 flags.go:64] FLAG: --runonce="false"
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.885105 3991 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.885114 3991 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.885123 3991 flags.go:64] FLAG: --seccomp-default="false"
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.885131 3991 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.885141 3991 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.885150 3991 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 08 03:08:58.891324 master-0 kubenswrapper[3991]: I0308 03:08:58.885159 3991 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: I0308 03:08:58.885168 3991 flags.go:64] FLAG: --storage-driver-password="root"
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: I0308 03:08:58.885177 3991 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: I0308 03:08:58.885185 3991 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: I0308 03:08:58.885194 3991 flags.go:64] FLAG: --storage-driver-user="root"
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: I0308 03:08:58.885203 3991 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: I0308 03:08:58.885212 3991 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: I0308 03:08:58.885221 3991 flags.go:64] FLAG: --system-cgroups=""
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: I0308 03:08:58.885229 3991 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: I0308 03:08:58.885243 3991 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: I0308 03:08:58.885252 3991 flags.go:64] FLAG: --tls-cert-file=""
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: I0308 03:08:58.885261 3991 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: I0308 03:08:58.885277 3991 flags.go:64] FLAG: --tls-min-version=""
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: I0308 03:08:58.885286 3991 flags.go:64] FLAG: --tls-private-key-file=""
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: I0308 03:08:58.885295 3991 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: I0308 03:08:58.885304 3991 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: I0308 03:08:58.885313 3991 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: I0308 03:08:58.885332 3991 flags.go:64] FLAG: --v="2"
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: I0308 03:08:58.885343 3991 flags.go:64] FLAG: --version="false"
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: I0308 03:08:58.885355 3991 flags.go:64] FLAG: --vmodule=""
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: I0308 03:08:58.885365 3991 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: I0308 03:08:58.885374 3991 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: W0308 03:08:58.885647 3991 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: W0308 03:08:58.885658 3991 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: W0308 03:08:58.885702 3991 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 08 03:08:58.892516 master-0 kubenswrapper[3991]: W0308 03:08:58.885712 3991 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 08 03:08:58.893553 master-0 kubenswrapper[3991]: W0308 03:08:58.885722 3991 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 08 03:08:58.893553 master-0 kubenswrapper[3991]: W0308 03:08:58.885730 3991 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 08 03:08:58.893553 master-0 kubenswrapper[3991]: W0308 03:08:58.885738 3991 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 08 03:08:58.893553 master-0 kubenswrapper[3991]: W0308 03:08:58.885746 3991 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 08 03:08:58.893553 master-0 kubenswrapper[3991]: W0308 03:08:58.885754 3991 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 08 03:08:58.893553 master-0 kubenswrapper[3991]: W0308 03:08:58.885762 3991 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 08 03:08:58.893553 master-0 kubenswrapper[3991]: W0308 03:08:58.885773 3991 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 08 03:08:58.893553 master-0 kubenswrapper[3991]: W0308 03:08:58.885783 3991 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 08 03:08:58.893553 master-0 kubenswrapper[3991]: W0308 03:08:58.885792 3991 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 08 03:08:58.893553 master-0 kubenswrapper[3991]: W0308 03:08:58.885800 3991 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 08 03:08:58.893553 master-0 kubenswrapper[3991]: W0308 03:08:58.885809 3991 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 08 03:08:58.893553 master-0 kubenswrapper[3991]: W0308 03:08:58.885817 3991 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 08 03:08:58.893553 master-0 kubenswrapper[3991]: W0308 03:08:58.885826 3991 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 08 03:08:58.893553 master-0 kubenswrapper[3991]: W0308 03:08:58.885835 3991 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 08 03:08:58.893553 master-0 kubenswrapper[3991]: W0308 03:08:58.885844 3991 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 08 03:08:58.893553 master-0 kubenswrapper[3991]: W0308 03:08:58.885852 3991 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 08 03:08:58.893553 master-0 kubenswrapper[3991]: W0308 03:08:58.885860 3991 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 08 03:08:58.893553 master-0 kubenswrapper[3991]: W0308 03:08:58.885868 3991 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 08 03:08:58.893553 master-0 kubenswrapper[3991]: W0308 03:08:58.885876 3991 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 08 03:08:58.893553 master-0 kubenswrapper[3991]: W0308 03:08:58.885884 3991 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 08 03:08:58.894456 master-0 kubenswrapper[3991]: W0308 03:08:58.885892 3991 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 08 03:08:58.894456 master-0 kubenswrapper[3991]: W0308 03:08:58.885900 3991 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 08 03:08:58.894456 master-0 kubenswrapper[3991]: W0308 03:08:58.885933 3991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 08 03:08:58.894456 master-0 kubenswrapper[3991]: W0308 03:08:58.885942 3991 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 08 03:08:58.894456 master-0 kubenswrapper[3991]: W0308 03:08:58.885953 3991 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 08 03:08:58.894456 master-0 kubenswrapper[3991]: W0308 03:08:58.885962 3991 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 08 03:08:58.894456 master-0 kubenswrapper[3991]: W0308 03:08:58.885972 3991 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 08 03:08:58.894456 master-0 kubenswrapper[3991]: W0308 03:08:58.885997 3991 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 08 03:08:58.894456 master-0 kubenswrapper[3991]: W0308 03:08:58.886009 3991 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 08 03:08:58.894456 master-0 kubenswrapper[3991]: W0308 03:08:58.886019 3991 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 08 03:08:58.894456 master-0 kubenswrapper[3991]: W0308 03:08:58.886029 3991 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 08 03:08:58.894456 master-0 kubenswrapper[3991]: W0308 03:08:58.886039 3991 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 08 03:08:58.894456 master-0 kubenswrapper[3991]: W0308 03:08:58.886048 3991 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 08 03:08:58.894456 master-0 kubenswrapper[3991]: W0308 03:08:58.886058 3991 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 08 03:08:58.894456 master-0 kubenswrapper[3991]: W0308 03:08:58.886066 3991 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 08 03:08:58.894456 master-0 kubenswrapper[3991]: W0308 03:08:58.886075 3991 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 08 03:08:58.894456 master-0 kubenswrapper[3991]: W0308 03:08:58.886082 3991 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 08 03:08:58.894456 master-0 kubenswrapper[3991]: W0308 03:08:58.886093 3991 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 08 03:08:58.894456 master-0 kubenswrapper[3991]: W0308 03:08:58.886103 3991 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 08 03:08:58.894456 master-0 kubenswrapper[3991]: W0308 03:08:58.886110 3991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 08 03:08:58.895440 master-0 kubenswrapper[3991]: W0308 03:08:58.886119 3991 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 08 03:08:58.895440 master-0 kubenswrapper[3991]: W0308 03:08:58.886126 3991 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 08 03:08:58.895440 master-0 kubenswrapper[3991]: W0308 03:08:58.886134 3991 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 08 03:08:58.895440 master-0 kubenswrapper[3991]: W0308 03:08:58.886141 3991 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 08 03:08:58.895440 master-0 kubenswrapper[3991]: W0308 03:08:58.886156 3991 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 08 03:08:58.895440 master-0 kubenswrapper[3991]: W0308 03:08:58.886164 3991 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 08 03:08:58.895440 master-0 kubenswrapper[3991]: W0308 03:08:58.886172 3991 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 08 03:08:58.895440 master-0 kubenswrapper[3991]: W0308 03:08:58.886179 3991 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 08 03:08:58.895440 master-0 kubenswrapper[3991]: W0308 03:08:58.886187 3991 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 08 03:08:58.895440 master-0 kubenswrapper[3991]: W0308 03:08:58.886195 3991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 08 03:08:58.895440 master-0 kubenswrapper[3991]: W0308 03:08:58.886202 3991 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 08 03:08:58.895440 master-0 kubenswrapper[3991]: W0308 03:08:58.886210 3991 feature_gate.go:330] unrecognized feature gate: Example
Mar 08 03:08:58.895440 master-0 kubenswrapper[3991]: W0308 03:08:58.886218 3991 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 08 03:08:58.895440 master-0 kubenswrapper[3991]: W0308 03:08:58.886225 3991 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 08 03:08:58.895440 master-0 kubenswrapper[3991]: W0308 03:08:58.886233 3991 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 08 03:08:58.895440 master-0 kubenswrapper[3991]: W0308 03:08:58.886241 3991 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 08 03:08:58.895440 master-0 kubenswrapper[3991]: W0308 03:08:58.886252 3991 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 08 03:08:58.895440 master-0 kubenswrapper[3991]: W0308 03:08:58.886262 3991 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 08 03:08:58.895440 master-0 kubenswrapper[3991]: W0308 03:08:58.886271 3991 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 08 03:08:58.895440 master-0 kubenswrapper[3991]: W0308 03:08:58.886280 3991 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 08 03:08:58.896417 master-0 kubenswrapper[3991]: W0308 03:08:58.886288 3991 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 08 03:08:58.896417 master-0 kubenswrapper[3991]: W0308 03:08:58.886296 3991 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 08 03:08:58.896417 master-0 kubenswrapper[3991]: W0308 03:08:58.886304 3991 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 08 03:08:58.896417 master-0 kubenswrapper[3991]: W0308 03:08:58.886335 3991 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 08 03:08:58.896417 master-0 kubenswrapper[3991]: W0308 03:08:58.886343 3991 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 08 03:08:58.896417 master-0 kubenswrapper[3991]: W0308 03:08:58.886351 3991 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 08 03:08:58.896417 master-0 kubenswrapper[3991]: W0308 03:08:58.886359 3991 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 08 03:08:58.896417 master-0 kubenswrapper[3991]: W0308 03:08:58.886367 3991 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 08 03:08:58.896417 master-0 kubenswrapper[3991]: I0308 03:08:58.887109 3991 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 08 03:08:58.902630 master-0 kubenswrapper[3991]: I0308 03:08:58.902548 3991 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 08 03:08:58.902630 master-0 kubenswrapper[3991]: I0308 03:08:58.902616 3991 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 08 03:08:58.902814 master-0 kubenswrapper[3991]: W0308 03:08:58.902774 3991 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 08 03:08:58.902814 master-0 kubenswrapper[3991]: W0308 03:08:58.902789 3991 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 08 03:08:58.902814 master-0 kubenswrapper[3991]: W0308 03:08:58.902798 3991 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 08 03:08:58.902814 master-0 kubenswrapper[3991]: W0308 03:08:58.902807 3991 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 08 03:08:58.902814 master-0 kubenswrapper[3991]: W0308 03:08:58.902816 3991 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 08 03:08:58.903063 master-0 kubenswrapper[3991]: W0308 03:08:58.902826 3991 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 08 03:08:58.903063 master-0 kubenswrapper[3991]: W0308 03:08:58.902835 3991 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 08 03:08:58.903063 master-0 kubenswrapper[3991]: W0308 03:08:58.902843 3991 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 08 03:08:58.903063 master-0 kubenswrapper[3991]: W0308 03:08:58.902852 3991 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 08 03:08:58.903063 master-0 kubenswrapper[3991]: W0308 03:08:58.902860 3991 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 08 03:08:58.903063 master-0 kubenswrapper[3991]: W0308 03:08:58.902867 3991 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 08 03:08:58.903063 master-0 kubenswrapper[3991]: W0308 03:08:58.902876 3991 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 08 03:08:58.903063 master-0 kubenswrapper[3991]: W0308 03:08:58.902884 3991 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 08 03:08:58.903063 master-0 kubenswrapper[3991]: W0308 03:08:58.902891 3991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 08 03:08:58.903063 master-0 kubenswrapper[3991]: W0308 03:08:58.902899 3991 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 08 03:08:58.903063 master-0 kubenswrapper[3991]: W0308 03:08:58.902936 3991 feature_gate.go:330] unrecognized feature gate: Example
Mar 08 03:08:58.903063 master-0 kubenswrapper[3991]: W0308 03:08:58.902944 3991 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 08 03:08:58.903063 master-0 kubenswrapper[3991]: W0308 03:08:58.902953 3991 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 08 03:08:58.903063 master-0 kubenswrapper[3991]: W0308 03:08:58.902961 3991 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 08 03:08:58.903063 master-0 kubenswrapper[3991]: W0308 03:08:58.902969 3991 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 08 03:08:58.903063 master-0 kubenswrapper[3991]: W0308 03:08:58.902977 3991 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 08 03:08:58.903063 master-0 kubenswrapper[3991]: W0308 03:08:58.902986 3991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 08 03:08:58.903063 master-0 kubenswrapper[3991]: W0308 03:08:58.902993 3991 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 08 03:08:58.903063 master-0 kubenswrapper[3991]: W0308 03:08:58.903001 3991 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 08 03:08:58.903956 master-0 kubenswrapper[3991]: W0308 03:08:58.903010 3991 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 08 03:08:58.903956 master-0 kubenswrapper[3991]: W0308 03:08:58.903021 3991 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 08 03:08:58.903956 master-0 kubenswrapper[3991]: W0308 03:08:58.903034 3991 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 08 03:08:58.903956 master-0 kubenswrapper[3991]: W0308 03:08:58.903043 3991 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 08 03:08:58.903956 master-0 kubenswrapper[3991]: W0308 03:08:58.903051 3991 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 08 03:08:58.903956 master-0 kubenswrapper[3991]: W0308 03:08:58.903061 3991 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 08 03:08:58.903956 master-0 kubenswrapper[3991]: W0308 03:08:58.903072 3991 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 08 03:08:58.903956 master-0 kubenswrapper[3991]: W0308 03:08:58.903083 3991 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 08 03:08:58.903956 master-0 kubenswrapper[3991]: W0308 03:08:58.903094 3991 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 08 03:08:58.903956 master-0 kubenswrapper[3991]: W0308 03:08:58.903105 3991 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 08 03:08:58.903956 master-0 kubenswrapper[3991]: W0308 03:08:58.903117 3991 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 08 03:08:58.903956 master-0 kubenswrapper[3991]: W0308 03:08:58.903127 3991 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 08 03:08:58.903956 master-0 kubenswrapper[3991]: W0308 03:08:58.903137 3991 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 08 03:08:58.903956 master-0 kubenswrapper[3991]: W0308 03:08:58.903146 3991 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 08 03:08:58.903956 master-0 kubenswrapper[3991]: W0308 03:08:58.903156 3991 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 08 03:08:58.903956 master-0 kubenswrapper[3991]: W0308 03:08:58.903166 3991 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 08 03:08:58.903956 master-0 kubenswrapper[3991]: W0308 03:08:58.903176 3991 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 08 03:08:58.903956 master-0 kubenswrapper[3991]: W0308 03:08:58.903186 3991 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 08 03:08:58.903956 master-0 kubenswrapper[3991]: W0308 03:08:58.903196 3991 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 08 03:08:58.903956 master-0 kubenswrapper[3991]: W0308 03:08:58.903205 3991 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 08 03:08:58.903956 master-0 kubenswrapper[3991]: W0308 03:08:58.903215 3991 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 08 03:08:58.904856 master-0 kubenswrapper[3991]: W0308 03:08:58.903226 3991 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 08 03:08:58.904856 master-0 kubenswrapper[3991]: W0308 03:08:58.903236 3991 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 08 03:08:58.904856 master-0 kubenswrapper[3991]: W0308 03:08:58.903247 3991 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 08 03:08:58.904856 master-0 kubenswrapper[3991]: W0308 03:08:58.903257 3991 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 08 03:08:58.904856 master-0 kubenswrapper[3991]: W0308 03:08:58.903267 3991 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 08 03:08:58.904856 master-0 kubenswrapper[3991]: W0308 03:08:58.903282 3991 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 08 03:08:58.904856 master-0 kubenswrapper[3991]: W0308 03:08:58.903295 3991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 08 03:08:58.904856 master-0 kubenswrapper[3991]: W0308 03:08:58.903303 3991 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 08 03:08:58.904856 master-0 kubenswrapper[3991]: W0308 03:08:58.903312 3991 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 08 03:08:58.904856 master-0 kubenswrapper[3991]: W0308 03:08:58.903319 3991 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 08 03:08:58.904856 master-0 kubenswrapper[3991]: W0308 03:08:58.903330 3991 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 08 03:08:58.904856 master-0 kubenswrapper[3991]: W0308 03:08:58.903341 3991 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 08 03:08:58.904856 master-0 kubenswrapper[3991]: W0308 03:08:58.903349 3991 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 08 03:08:58.904856 master-0 kubenswrapper[3991]: W0308 03:08:58.903360 3991 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 08 03:08:58.904856 master-0 kubenswrapper[3991]: W0308 03:08:58.903368 3991 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 08 03:08:58.904856 master-0 kubenswrapper[3991]: W0308 03:08:58.903378 3991 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 08 03:08:58.904856 master-0 kubenswrapper[3991]: W0308 03:08:58.903391 3991 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 08 03:08:58.904856 master-0 kubenswrapper[3991]: W0308 03:08:58.903399 3991 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 08 03:08:58.904856 master-0 kubenswrapper[3991]: W0308 03:08:58.903408 3991 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 08 03:08:58.906116 master-0 kubenswrapper[3991]: W0308 03:08:58.903416 3991 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 08 03:08:58.906116 master-0 kubenswrapper[3991]: W0308 03:08:58.903425 3991 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 08 03:08:58.906116 master-0 kubenswrapper[3991]: W0308 03:08:58.903433 3991 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 08 03:08:58.906116 master-0 kubenswrapper[3991]: W0308 03:08:58.903441 3991 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 08 03:08:58.906116 master-0 kubenswrapper[3991]: W0308 03:08:58.903449 3991 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 08 03:08:58.906116 master-0 kubenswrapper[3991]: W0308 03:08:58.903458 3991 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 08 03:08:58.906116 master-0 kubenswrapper[3991]: W0308 03:08:58.903467 3991 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 08 03:08:58.906116 master-0 kubenswrapper[3991]: W0308 03:08:58.903475 3991 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 08 03:08:58.906116 master-0 kubenswrapper[3991]: I0308 03:08:58.903489 3991 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 08 03:08:58.906116 master-0 kubenswrapper[3991]: W0308 03:08:58.903755 3991 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 08 03:08:58.906116 master-0 kubenswrapper[3991]: W0308 03:08:58.903771 3991 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 08 03:08:58.906116 master-0 kubenswrapper[3991]: W0308 03:08:58.903780 3991 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 08 03:08:58.906116 master-0 kubenswrapper[3991]: W0308 03:08:58.903789 3991 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 08 03:08:58.906116 master-0 kubenswrapper[3991]: W0308 03:08:58.903798 3991 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 08 03:08:58.906116 master-0 kubenswrapper[3991]: W0308 03:08:58.903808 3991 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 08 03:08:58.906852 master-0 kubenswrapper[3991]: W0308 03:08:58.903819 3991 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 08 03:08:58.906852 master-0 kubenswrapper[3991]: W0308 03:08:58.903828 3991 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 08 03:08:58.906852 master-0 kubenswrapper[3991]: W0308 03:08:58.903837 3991 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 08 03:08:58.906852 master-0 kubenswrapper[3991]: W0308 03:08:58.903846 3991 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 08 03:08:58.906852 master-0 kubenswrapper[3991]: W0308 03:08:58.903854 3991 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 08 03:08:58.906852 master-0 kubenswrapper[3991]: W0308 03:08:58.903863 3991 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 08 03:08:58.906852 master-0 kubenswrapper[3991]: W0308 03:08:58.903872 3991 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 08 03:08:58.906852 master-0 kubenswrapper[3991]: W0308 03:08:58.903881 3991 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 08 03:08:58.906852 master-0 kubenswrapper[3991]: W0308 03:08:58.903889 3991 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 08 03:08:58.906852 master-0 kubenswrapper[3991]: W0308 03:08:58.903897 3991 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 08 03:08:58.906852 master-0 kubenswrapper[3991]: W0308 03:08:58.903935 3991 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 08 03:08:58.906852 master-0 kubenswrapper[3991]: W0308 03:08:58.903943 3991 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 08 03:08:58.906852 master-0 kubenswrapper[3991]: W0308 03:08:58.903952 3991 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 08 03:08:58.906852 master-0 kubenswrapper[3991]: W0308 03:08:58.903961 3991 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 08 03:08:58.906852 master-0 kubenswrapper[3991]: W0308 03:08:58.903970 3991 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 08 03:08:58.906852 master-0 kubenswrapper[3991]: W0308 03:08:58.903979 3991 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 08 03:08:58.906852 master-0 kubenswrapper[3991]: W0308 03:08:58.903987 3991 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 08 03:08:58.906852 master-0 kubenswrapper[3991]: W0308 03:08:58.903996 3991 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 08 03:08:58.906852 master-0 kubenswrapper[3991]: W0308 03:08:58.904004 3991 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 08 03:08:58.906852 master-0 kubenswrapper[3991]: W0308 03:08:58.904012 3991 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 08 03:08:58.907894 master-0 kubenswrapper[3991]: W0308 03:08:58.904019 3991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 08 03:08:58.907894 master-0 kubenswrapper[3991]: W0308 03:08:58.904027 3991 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 08 03:08:58.907894 master-0 kubenswrapper[3991]: W0308 03:08:58.904035 3991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 08 03:08:58.907894 master-0 kubenswrapper[3991]: W0308 03:08:58.904043 3991 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 08 03:08:58.907894 master-0 kubenswrapper[3991]: W0308 03:08:58.904050 3991 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 08 03:08:58.907894 master-0 kubenswrapper[3991]: W0308 03:08:58.904058 3991 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 08 03:08:58.907894 master-0 kubenswrapper[3991]: W0308 03:08:58.904066 3991 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 08 03:08:58.907894 master-0 kubenswrapper[3991]: W0308 03:08:58.904075 3991 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 08 03:08:58.907894 master-0 kubenswrapper[3991]: W0308 03:08:58.904084 3991 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 08 03:08:58.907894 master-0 kubenswrapper[3991]: W0308 03:08:58.904092 3991 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 08 03:08:58.907894 master-0 kubenswrapper[3991]: W0308 03:08:58.904099 3991 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 08 03:08:58.907894 master-0 kubenswrapper[3991]: W0308 03:08:58.904107 3991 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 08 03:08:58.907894 master-0 kubenswrapper[3991]: W0308 03:08:58.904129 3991 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 08 03:08:58.907894 master-0 kubenswrapper[3991]: W0308 03:08:58.904191 3991 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 08 03:08:58.907894 master-0 kubenswrapper[3991]: W0308 03:08:58.904206 3991 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 08 03:08:58.907894 master-0 kubenswrapper[3991]: W0308 03:08:58.904218 3991 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 08 03:08:58.907894 master-0 kubenswrapper[3991]: W0308 03:08:58.904229 3991 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 08 03:08:58.907894 master-0 kubenswrapper[3991]: W0308 03:08:58.904237 3991 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 08 03:08:58.907894 master-0 kubenswrapper[3991]: W0308 03:08:58.904249 3991 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 08 03:08:58.907894 master-0 kubenswrapper[3991]: W0308 03:08:58.904258 3991 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 08 03:08:58.908867 master-0 kubenswrapper[3991]: W0308 03:08:58.904268 3991 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 08 03:08:58.908867 master-0 kubenswrapper[3991]: W0308 03:08:58.904276 3991 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 08 03:08:58.908867 master-0 kubenswrapper[3991]: W0308 03:08:58.904286 3991 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 08 03:08:58.908867 master-0 kubenswrapper[3991]: W0308 03:08:58.904297 3991 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 08 03:08:58.908867 master-0 kubenswrapper[3991]: W0308 03:08:58.904309 3991 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 08 03:08:58.908867 master-0 kubenswrapper[3991]: W0308 03:08:58.904320 3991 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 08 03:08:58.908867 master-0 kubenswrapper[3991]: W0308 03:08:58.904331 3991 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 08 03:08:58.908867 master-0 kubenswrapper[3991]: W0308 03:08:58.904342 3991 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 08 03:08:58.908867 master-0 kubenswrapper[3991]: W0308 03:08:58.904355 3991 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 08 03:08:58.908867 master-0 kubenswrapper[3991]: W0308 03:08:58.904367 3991 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 08 03:08:58.908867 master-0 kubenswrapper[3991]: W0308 03:08:58.904380 3991 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 08 03:08:58.908867 master-0 kubenswrapper[3991]: W0308 03:08:58.904390 3991 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 08 03:08:58.908867 master-0 kubenswrapper[3991]: W0308 03:08:58.904401 3991 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 08 03:08:58.908867 master-0 kubenswrapper[3991]: W0308 03:08:58.904413 3991 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 08 03:08:58.908867 master-0 kubenswrapper[3991]: W0308 03:08:58.904423 3991 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 08 03:08:58.908867 master-0 kubenswrapper[3991]: W0308 03:08:58.904434 3991 feature_gate.go:330] unrecognized feature gate: Example
Mar 08 03:08:58.908867 master-0 kubenswrapper[3991]: W0308 03:08:58.904443 3991 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 08 03:08:58.908867 master-0 kubenswrapper[3991]: W0308 03:08:58.904454 3991 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 08 03:08:58.908867 master-0 kubenswrapper[3991]: W0308 03:08:58.904464 3991 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 08 03:08:58.908867 master-0 kubenswrapper[3991]: W0308 03:08:58.904474 3991 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 08 03:08:58.910102 master-0 kubenswrapper[3991]: W0308 03:08:58.904484 3991 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 08 03:08:58.910102 master-0 kubenswrapper[3991]: W0308 03:08:58.904494 3991 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 08 03:08:58.910102 master-0 kubenswrapper[3991]: W0308 03:08:58.904504 3991 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 08 03:08:58.910102 master-0 kubenswrapper[3991]: W0308 03:08:58.904514 3991 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 08 03:08:58.910102 master-0 kubenswrapper[3991]: W0308 03:08:58.904527 3991 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 08 03:08:58.910102 master-0 kubenswrapper[3991]: W0308 03:08:58.904537 3991 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 08 03:08:58.910102 master-0 kubenswrapper[3991]: I0308 03:08:58.904549 3991 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 08 03:08:58.910102 master-0 kubenswrapper[3991]: I0308 03:08:58.906175 3991 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 08 03:08:58.912025 master-0 kubenswrapper[3991]: I0308 03:08:58.911975 3991 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Mar 08 03:08:58.913613 master-0 kubenswrapper[3991]: I0308 03:08:58.913555 3991 server.go:997] "Starting client certificate rotation"
Mar 08 03:08:58.913613 master-0 kubenswrapper[3991]: I0308 03:08:58.913603 3991 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 08 03:08:58.913976 master-0 kubenswrapper[3991]: I0308 03:08:58.913895 3991 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 08 03:08:58.945426 master-0 kubenswrapper[3991]: I0308 03:08:58.945349 3991 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 08 03:08:58.951795 master-0 kubenswrapper[3991]: I0308 03:08:58.951732 3991 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 08 03:08:58.951999 master-0 kubenswrapper[3991]: E0308 03:08:58.951890 3991 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 08 03:08:58.971481 master-0 kubenswrapper[3991]: I0308 03:08:58.971415 3991 log.go:25] "Validated CRI v1 runtime API"
Mar 08 03:08:58.982411 master-0 kubenswrapper[3991]: I0308 03:08:58.982349 3991 log.go:25] "Validated CRI v1 image API"
Mar 08 03:08:58.985593 master-0 kubenswrapper[3991]: I0308 03:08:58.985495 3991 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 08 03:08:58.991741 master-0 kubenswrapper[3991]: I0308 03:08:58.991667 3991 fs.go:135] Filesystem UUIDs: map[0b52d2da-0de4-4c5d-93b4-a42985f64420:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Mar 08 03:08:58.991741 master-0 kubenswrapper[3991]: I0308 03:08:58.991716 3991 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}]
Mar 08 03:08:59.021081 master-0 kubenswrapper[3991]: I0308 03:08:59.020565 3991 manager.go:217] Machine: {Timestamp:2026-03-08 03:08:59.017596202 +0000 UTC m=+0.583533497 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ca41eca1edff4210bb11657bca9f1e6d SystemUUID:ca41eca1-edff-4210-bb11-657bca9f1e6d BootID:c341f940-4e88-4b9b-a4b4-98442bfad22d Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:b5:5c:2e Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:da:1c:db:80:ac:18 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Mar 08 03:08:59.021081 master-0 kubenswrapper[3991]: I0308 03:08:59.020987 3991 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Mar 08 03:08:59.021465 master-0 kubenswrapper[3991]: I0308 03:08:59.021237 3991 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 08 03:08:59.022837 master-0 kubenswrapper[3991]: I0308 03:08:59.022776 3991 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 08 03:08:59.023266 master-0 kubenswrapper[3991]: I0308 03:08:59.023192 3991 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 08 03:08:59.023670 master-0 kubenswrapper[3991]: I0308 03:08:59.023247 3991 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentag
e":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 08 03:08:59.023807 master-0 kubenswrapper[3991]: I0308 03:08:59.023701 3991 topology_manager.go:138] "Creating topology manager with none policy" Mar 08 03:08:59.023807 master-0 kubenswrapper[3991]: I0308 03:08:59.023728 3991 container_manager_linux.go:303] "Creating device plugin manager" Mar 08 03:08:59.024006 master-0 kubenswrapper[3991]: I0308 03:08:59.023837 3991 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 08 03:08:59.024006 master-0 kubenswrapper[3991]: I0308 03:08:59.023893 3991 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 08 03:08:59.024178 master-0 kubenswrapper[3991]: I0308 03:08:59.024154 3991 state_mem.go:36] "Initialized new in-memory state store" Mar 08 03:08:59.024395 master-0 kubenswrapper[3991]: I0308 03:08:59.024339 3991 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 08 03:08:59.027984 master-0 kubenswrapper[3991]: I0308 03:08:59.027938 3991 kubelet.go:418] "Attempting to sync node with API server" Mar 08 03:08:59.027984 master-0 kubenswrapper[3991]: I0308 03:08:59.027978 3991 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 08 03:08:59.028130 master-0 kubenswrapper[3991]: I0308 03:08:59.028021 3991 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 08 03:08:59.028130 master-0 kubenswrapper[3991]: I0308 03:08:59.028046 3991 kubelet.go:324] "Adding apiserver pod source" Mar 08 03:08:59.028130 master-0 
kubenswrapper[3991]: I0308 03:08:59.028091 3991 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 08 03:08:59.031834 master-0 kubenswrapper[3991]: W0308 03:08:59.031730 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 03:08:59.031834 master-0 kubenswrapper[3991]: W0308 03:08:59.031785 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 03:08:59.031975 master-0 kubenswrapper[3991]: E0308 03:08:59.031846 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 03:08:59.031975 master-0 kubenswrapper[3991]: E0308 03:08:59.031862 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 03:08:59.034047 master-0 kubenswrapper[3991]: I0308 03:08:59.033949 3991 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 08 03:08:59.037377 master-0 kubenswrapper[3991]: I0308 03:08:59.037323 3991 
kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 08 03:08:59.037856 master-0 kubenswrapper[3991]: I0308 03:08:59.037756 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 08 03:08:59.037856 master-0 kubenswrapper[3991]: I0308 03:08:59.037804 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 08 03:08:59.038098 master-0 kubenswrapper[3991]: I0308 03:08:59.037864 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 08 03:08:59.038098 master-0 kubenswrapper[3991]: I0308 03:08:59.037884 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 08 03:08:59.038098 master-0 kubenswrapper[3991]: I0308 03:08:59.037930 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 08 03:08:59.038098 master-0 kubenswrapper[3991]: I0308 03:08:59.037952 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 08 03:08:59.038098 master-0 kubenswrapper[3991]: I0308 03:08:59.037969 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 08 03:08:59.038098 master-0 kubenswrapper[3991]: I0308 03:08:59.037987 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 08 03:08:59.038098 master-0 kubenswrapper[3991]: I0308 03:08:59.038012 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 08 03:08:59.038098 master-0 kubenswrapper[3991]: I0308 03:08:59.038031 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 08 03:08:59.038098 master-0 kubenswrapper[3991]: I0308 03:08:59.038053 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 08 03:08:59.038098 master-0 kubenswrapper[3991]: I0308 03:08:59.038101 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 08 03:08:59.040065 master-0 
kubenswrapper[3991]: I0308 03:08:59.039997 3991 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 08 03:08:59.040978 master-0 kubenswrapper[3991]: I0308 03:08:59.040798 3991 server.go:1280] "Started kubelet" Mar 08 03:08:59.042636 master-0 systemd[1]: Started Kubernetes Kubelet. Mar 08 03:08:59.043064 master-0 kubenswrapper[3991]: I0308 03:08:59.041884 3991 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 08 03:08:59.043693 master-0 kubenswrapper[3991]: I0308 03:08:59.043551 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 03:08:59.043815 master-0 kubenswrapper[3991]: I0308 03:08:59.041880 3991 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 08 03:08:59.043878 master-0 kubenswrapper[3991]: I0308 03:08:59.043847 3991 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 08 03:08:59.044466 master-0 kubenswrapper[3991]: I0308 03:08:59.044433 3991 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 08 03:08:59.046696 master-0 kubenswrapper[3991]: I0308 03:08:59.046655 3991 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 08 03:08:59.046836 master-0 kubenswrapper[3991]: I0308 03:08:59.046716 3991 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 08 03:08:59.046836 master-0 kubenswrapper[3991]: I0308 03:08:59.046756 3991 server.go:449] "Adding debug handlers to kubelet server" Mar 08 03:08:59.053105 master-0 kubenswrapper[3991]: E0308 03:08:59.053039 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:08:59.053357 master-0 kubenswrapper[3991]: I0308 
03:08:59.053317 3991 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 08 03:08:59.054639 master-0 kubenswrapper[3991]: I0308 03:08:59.053293 3991 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 08 03:08:59.054639 master-0 kubenswrapper[3991]: I0308 03:08:59.054627 3991 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 08 03:08:59.054968 master-0 kubenswrapper[3991]: I0308 03:08:59.054936 3991 reconstruct.go:97] "Volume reconstruction finished" Mar 08 03:08:59.054968 master-0 kubenswrapper[3991]: I0308 03:08:59.054958 3991 reconciler.go:26] "Reconciler: start to sync state" Mar 08 03:08:59.055386 master-0 kubenswrapper[3991]: W0308 03:08:59.055275 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 03:08:59.055480 master-0 kubenswrapper[3991]: E0308 03:08:59.055414 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 03:08:59.056140 master-0 kubenswrapper[3991]: E0308 03:08:59.055753 3991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 08 03:08:59.056409 master-0 kubenswrapper[3991]: E0308 03:08:59.054409 3991 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 
192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189abeef77d6faf7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.040750327 +0000 UTC m=+0.606687592,LastTimestamp:2026-03-08 03:08:59.040750327 +0000 UTC m=+0.606687592,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:08:59.058585 master-0 kubenswrapper[3991]: I0308 03:08:59.058522 3991 factory.go:55] Registering systemd factory Mar 08 03:08:59.058585 master-0 kubenswrapper[3991]: I0308 03:08:59.058582 3991 factory.go:221] Registration of the systemd container factory successfully Mar 08 03:08:59.060996 master-0 kubenswrapper[3991]: I0308 03:08:59.060936 3991 factory.go:153] Registering CRI-O factory Mar 08 03:08:59.060996 master-0 kubenswrapper[3991]: I0308 03:08:59.060985 3991 factory.go:221] Registration of the crio container factory successfully Mar 08 03:08:59.061859 master-0 kubenswrapper[3991]: I0308 03:08:59.061798 3991 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 08 03:08:59.061992 master-0 kubenswrapper[3991]: I0308 03:08:59.061862 3991 factory.go:103] Registering Raw factory Mar 08 03:08:59.061992 master-0 kubenswrapper[3991]: I0308 03:08:59.061934 3991 manager.go:1196] Started watching for new ooms in manager Mar 08 03:08:59.063210 master-0 kubenswrapper[3991]: I0308 03:08:59.063152 3991 manager.go:319] Starting recovery of all containers Mar 08 03:08:59.063321 master-0 kubenswrapper[3991]: E0308 
03:08:59.063205 3991 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 08 03:08:59.092023 master-0 kubenswrapper[3991]: I0308 03:08:59.091483 3991 manager.go:324] Recovery completed Mar 08 03:08:59.102955 master-0 kubenswrapper[3991]: I0308 03:08:59.102884 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:08:59.105341 master-0 kubenswrapper[3991]: I0308 03:08:59.105260 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:08:59.105492 master-0 kubenswrapper[3991]: I0308 03:08:59.105352 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:08:59.105492 master-0 kubenswrapper[3991]: I0308 03:08:59.105382 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:08:59.106465 master-0 kubenswrapper[3991]: I0308 03:08:59.106424 3991 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 08 03:08:59.106465 master-0 kubenswrapper[3991]: I0308 03:08:59.106449 3991 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 08 03:08:59.106605 master-0 kubenswrapper[3991]: I0308 03:08:59.106474 3991 state_mem.go:36] "Initialized new in-memory state store" Mar 08 03:08:59.111100 master-0 kubenswrapper[3991]: I0308 03:08:59.111058 3991 policy_none.go:49] "None policy: Start" Mar 08 03:08:59.111827 master-0 kubenswrapper[3991]: I0308 03:08:59.111765 3991 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 08 03:08:59.111933 master-0 kubenswrapper[3991]: I0308 03:08:59.111832 3991 state_mem.go:35] "Initializing new in-memory state store" Mar 08 03:08:59.153446 master-0 kubenswrapper[3991]: E0308 03:08:59.153392 3991 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"master-0\" not found" Mar 08 03:08:59.202216 master-0 kubenswrapper[3991]: I0308 03:08:59.202165 3991 manager.go:334] "Starting Device Plugin manager" Mar 08 03:08:59.215942 master-0 kubenswrapper[3991]: I0308 03:08:59.202224 3991 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 08 03:08:59.215942 master-0 kubenswrapper[3991]: I0308 03:08:59.202265 3991 server.go:79] "Starting device plugin registration server" Mar 08 03:08:59.215942 master-0 kubenswrapper[3991]: I0308 03:08:59.202744 3991 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 08 03:08:59.215942 master-0 kubenswrapper[3991]: I0308 03:08:59.202758 3991 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 08 03:08:59.215942 master-0 kubenswrapper[3991]: I0308 03:08:59.203797 3991 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 08 03:08:59.215942 master-0 kubenswrapper[3991]: I0308 03:08:59.204082 3991 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 08 03:08:59.215942 master-0 kubenswrapper[3991]: I0308 03:08:59.204104 3991 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 08 03:08:59.215942 master-0 kubenswrapper[3991]: E0308 03:08:59.206615 3991 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 08 03:08:59.215942 master-0 kubenswrapper[3991]: I0308 03:08:59.213464 3991 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 08 03:08:59.215942 master-0 kubenswrapper[3991]: I0308 03:08:59.215325 3991 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6"
Mar 08 03:08:59.215942 master-0 kubenswrapper[3991]: I0308 03:08:59.215841 3991 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 08 03:08:59.215942 master-0 kubenswrapper[3991]: I0308 03:08:59.215963 3991 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 08 03:08:59.216640 master-0 kubenswrapper[3991]: E0308 03:08:59.216057 3991 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Mar 08 03:08:59.216970 master-0 kubenswrapper[3991]: W0308 03:08:59.216831 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 08 03:08:59.217234 master-0 kubenswrapper[3991]: E0308 03:08:59.217167 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 08 03:08:59.257311 master-0 kubenswrapper[3991]: E0308 03:08:59.257232 3991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Mar 08 03:08:59.303552 master-0 kubenswrapper[3991]: I0308 03:08:59.303482 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 03:08:59.305001 master-0 kubenswrapper[3991]: I0308 03:08:59.304952 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 03:08:59.305084 master-0 kubenswrapper[3991]: I0308 03:08:59.305010 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 03:08:59.305084 master-0 kubenswrapper[3991]: I0308 03:08:59.305029 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 03:08:59.305084 master-0 kubenswrapper[3991]: I0308 03:08:59.305068 3991 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 08 03:08:59.306137 master-0 kubenswrapper[3991]: E0308 03:08:59.306072 3991 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 08 03:08:59.317220 master-0 kubenswrapper[3991]: I0308 03:08:59.317167 3991 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 08 03:08:59.317314 master-0 kubenswrapper[3991]: I0308 03:08:59.317262 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 03:08:59.318385 master-0 kubenswrapper[3991]: I0308 03:08:59.318342 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 03:08:59.318463 master-0 kubenswrapper[3991]: I0308 03:08:59.318401 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 03:08:59.318463 master-0 kubenswrapper[3991]: I0308 03:08:59.318420 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 03:08:59.318653 master-0 kubenswrapper[3991]: I0308 03:08:59.318618 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 03:08:59.319054 master-0 kubenswrapper[3991]: I0308 03:08:59.319006 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:08:59.319130 master-0 kubenswrapper[3991]: I0308 03:08:59.319067 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 03:08:59.320119 master-0 kubenswrapper[3991]: I0308 03:08:59.320072 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 03:08:59.320119 master-0 kubenswrapper[3991]: I0308 03:08:59.320113 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 03:08:59.320262 master-0 kubenswrapper[3991]: I0308 03:08:59.320120 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 03:08:59.320262 master-0 kubenswrapper[3991]: I0308 03:08:59.320131 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 03:08:59.320262 master-0 kubenswrapper[3991]: I0308 03:08:59.320150 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 03:08:59.320262 master-0 kubenswrapper[3991]: I0308 03:08:59.320167 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 03:08:59.320468 master-0 kubenswrapper[3991]: I0308 03:08:59.320285 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 03:08:59.320613 master-0 kubenswrapper[3991]: I0308 03:08:59.320574 3991 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 08 03:08:59.320680 master-0 kubenswrapper[3991]: I0308 03:08:59.320613 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 03:08:59.321277 master-0 kubenswrapper[3991]: I0308 03:08:59.321222 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 03:08:59.321277 master-0 kubenswrapper[3991]: I0308 03:08:59.321276 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 03:08:59.321649 master-0 kubenswrapper[3991]: I0308 03:08:59.321381 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 03:08:59.321649 master-0 kubenswrapper[3991]: I0308 03:08:59.321559 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 03:08:59.321649 master-0 kubenswrapper[3991]: I0308 03:08:59.321589 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 03:08:59.321649 master-0 kubenswrapper[3991]: I0308 03:08:59.321605 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 03:08:59.321944 master-0 kubenswrapper[3991]: I0308 03:08:59.321768 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 03:08:59.321944 master-0 kubenswrapper[3991]: I0308 03:08:59.321863 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 03:08:59.321944 master-0 kubenswrapper[3991]: I0308 03:08:59.321931 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 03:08:59.322866 master-0 kubenswrapper[3991]: I0308 03:08:59.322804 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 03:08:59.322866 master-0 kubenswrapper[3991]: I0308 03:08:59.322858 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 03:08:59.323075 master-0 kubenswrapper[3991]: I0308 03:08:59.322878 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 03:08:59.323075 master-0 kubenswrapper[3991]: I0308 03:08:59.323043 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 03:08:59.323228 master-0 kubenswrapper[3991]: I0308 03:08:59.323184 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 03:08:59.323228 master-0 kubenswrapper[3991]: I0308 03:08:59.323213 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 03:08:59.323228 master-0 kubenswrapper[3991]: I0308 03:08:59.323229 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 03:08:59.323447 master-0 kubenswrapper[3991]: I0308 03:08:59.323305 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 08 03:08:59.323447 master-0 kubenswrapper[3991]: I0308 03:08:59.323361 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 03:08:59.324505 master-0 kubenswrapper[3991]: I0308 03:08:59.324451 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 03:08:59.324505 master-0 kubenswrapper[3991]: I0308 03:08:59.324497 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 03:08:59.324686 master-0 kubenswrapper[3991]: I0308 03:08:59.324514 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 03:08:59.324686 master-0 kubenswrapper[3991]: I0308 03:08:59.324606 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 03:08:59.324686 master-0 kubenswrapper[3991]: I0308 03:08:59.324639 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 03:08:59.324686 master-0 kubenswrapper[3991]: I0308 03:08:59.324657 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 03:08:59.324931 master-0 kubenswrapper[3991]: I0308 03:08:59.324859 3991 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 03:08:59.324931 master-0 kubenswrapper[3991]: I0308 03:08:59.324894 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 03:08:59.325951 master-0 kubenswrapper[3991]: I0308 03:08:59.325861 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 03:08:59.325951 master-0 kubenswrapper[3991]: I0308 03:08:59.325931 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 03:08:59.325951 master-0 kubenswrapper[3991]: I0308 03:08:59.325951 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 03:08:59.355524 master-0 kubenswrapper[3991]: I0308 03:08:59.355449 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 03:08:59.355524 master-0 kubenswrapper[3991]: I0308 03:08:59.355526 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 03:08:59.355739 master-0 kubenswrapper[3991]: I0308 03:08:59.355560 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 03:08:59.355739 master-0 kubenswrapper[3991]: I0308 03:08:59.355590 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 03:08:59.355739 master-0 kubenswrapper[3991]: I0308 03:08:59.355659 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:08:59.355739 master-0 kubenswrapper[3991]: I0308 03:08:59.355692 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:08:59.356089 master-0 kubenswrapper[3991]: I0308 03:08:59.355769 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:08:59.356089 master-0 kubenswrapper[3991]: I0308 03:08:59.355838 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 08 03:08:59.356089 master-0 kubenswrapper[3991]: I0308 03:08:59.355892 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 03:08:59.356089 master-0 kubenswrapper[3991]: I0308 03:08:59.355979 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:08:59.356089 master-0 kubenswrapper[3991]: I0308 03:08:59.356049 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 08 03:08:59.356367 master-0 kubenswrapper[3991]: I0308 03:08:59.356098 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") "
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 03:08:59.356367 master-0 kubenswrapper[3991]: I0308 03:08:59.356145 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:08:59.356367 master-0 kubenswrapper[3991]: I0308 03:08:59.356203 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 08 03:08:59.356367 master-0 kubenswrapper[3991]: I0308 03:08:59.356245 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 08 03:08:59.356367 master-0 kubenswrapper[3991]: I0308 03:08:59.356288 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 03:08:59.356622 master-0 kubenswrapper[3991]: I0308 03:08:59.356439 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 03:08:59.456961 master-0 kubenswrapper[3991]: I0308 03:08:59.456862 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 03:08:59.457133 master-0 kubenswrapper[3991]: I0308 03:08:59.456973 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 03:08:59.457196 master-0 kubenswrapper[3991]: I0308 03:08:59.457140 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 03:08:59.457390 master-0 kubenswrapper[3991]: I0308 03:08:59.457341 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 03:08:59.457549 master-0 kubenswrapper[3991]: I0308 03:08:59.457463 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 03:08:59.457549 master-0 kubenswrapper[3991]: I0308 03:08:59.457524 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 03:08:59.457668 master-0 kubenswrapper[3991]: I0308 03:08:59.457589 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 03:08:59.457668 master-0 kubenswrapper[3991]: I0308 03:08:59.457561 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:08:59.457668 master-0 kubenswrapper[3991]: I0308 03:08:59.457644 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 03:08:59.457833 master-0 kubenswrapper[3991]: I0308 03:08:59.457653 3991 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:08:59.457833 master-0 kubenswrapper[3991]: I0308 03:08:59.457691 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:08:59.457833 master-0 kubenswrapper[3991]: I0308 03:08:59.457751 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:08:59.457833 master-0 kubenswrapper[3991]: I0308 03:08:59.457751 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:08:59.457833 master-0 kubenswrapper[3991]: I0308 03:08:59.457804 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:08:59.457833 master-0 kubenswrapper[3991]: I0308 03:08:59.457802 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 08 03:08:59.458182 master-0 kubenswrapper[3991]: I0308 03:08:59.457864 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 08 03:08:59.458182 master-0 kubenswrapper[3991]: I0308 03:08:59.457876 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 03:08:59.458182 master-0 kubenswrapper[3991]: I0308 03:08:59.457970 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 03:08:59.458182 master-0 kubenswrapper[3991]: I0308 03:08:59.457998 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:08:59.458182 master-0 kubenswrapper[3991]: I0308 03:08:59.458073 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 08 03:08:59.458182 master-0 kubenswrapper[3991]: I0308 03:08:59.458109 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 03:08:59.458182 master-0 kubenswrapper[3991]: I0308 03:08:59.458166 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 08 03:08:59.458550 master-0 kubenswrapper[3991]: I0308 03:08:59.458208 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:08:59.458550 master-0 kubenswrapper[3991]: I0308 03:08:59.458256 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:08:59.458550 master-0 kubenswrapper[3991]: I0308 03:08:59.458275 3991
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:08:59.458550 master-0 kubenswrapper[3991]: I0308 03:08:59.458308 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 03:08:59.458550 master-0 kubenswrapper[3991]: I0308 03:08:59.458308 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 08 03:08:59.458550 master-0 kubenswrapper[3991]: I0308 03:08:59.458356 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 08 03:08:59.458550 master-0 kubenswrapper[3991]: I0308 03:08:59.458357 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 08 03:08:59.458550 master-0 kubenswrapper[3991]: I0308 03:08:59.458402 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 08 03:08:59.458550 master-0 kubenswrapper[3991]: I0308 03:08:59.458438 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 03:08:59.458550 master-0 kubenswrapper[3991]: I0308 03:08:59.458513 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 03:08:59.458550 master-0 kubenswrapper[3991]: I0308 03:08:59.458533 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 03:08:59.459147 master-0 kubenswrapper[3991]: I0308 03:08:59.458583 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 03:08:59.507067 master-0 kubenswrapper[3991]: I0308 03:08:59.506935 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 03:08:59.508544 master-0 kubenswrapper[3991]: I0308 03:08:59.508456 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 03:08:59.508544 master-0 kubenswrapper[3991]: I0308 03:08:59.508532 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 03:08:59.508544 master-0 kubenswrapper[3991]: I0308 03:08:59.508552 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 03:08:59.508844 master-0 kubenswrapper[3991]: I0308 03:08:59.508636 3991 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 08 03:08:59.510145 master-0 kubenswrapper[3991]: E0308 03:08:59.510029 3991 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 08 03:08:59.659759 master-0 kubenswrapper[3991]: E0308 03:08:59.659512 3991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Mar 08 03:08:59.668144 master-0 kubenswrapper[3991]: I0308 03:08:59.668061 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:08:59.682834 master-0 kubenswrapper[3991]: I0308 03:08:59.682765 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 08 03:08:59.705356 master-0 kubenswrapper[3991]: I0308 03:08:59.705264 3991 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 03:08:59.727252 master-0 kubenswrapper[3991]: I0308 03:08:59.727153 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 08 03:08:59.738638 master-0 kubenswrapper[3991]: I0308 03:08:59.738546 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 03:08:59.911297 master-0 kubenswrapper[3991]: I0308 03:08:59.911114 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 03:08:59.912713 master-0 kubenswrapper[3991]: I0308 03:08:59.912651 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 03:08:59.912804 master-0 kubenswrapper[3991]: I0308 03:08:59.912724 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 03:08:59.912804 master-0 kubenswrapper[3991]: I0308 03:08:59.912743 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 03:08:59.912953 master-0 kubenswrapper[3991]: I0308 03:08:59.912815 3991 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 08 03:08:59.914099 master-0 kubenswrapper[3991]: E0308 03:08:59.914034 3991 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 08 03:09:00.045342 master-0 kubenswrapper[3991]: I0308 03:09:00.045219 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 08 03:09:00.112315 master-0 kubenswrapper[3991]: W0308 03:09:00.112164 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 08 03:09:00.112315 master-0 kubenswrapper[3991]: E0308 03:09:00.112309 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 08 03:09:00.301573 master-0 kubenswrapper[3991]: W0308 03:09:00.301366 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 08 03:09:00.301573 master-0 kubenswrapper[3991]: E0308 03:09:00.301486 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 08 03:09:00.329307 master-0 kubenswrapper[3991]: W0308 03:09:00.329166 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9add8df47182fc2eaf8cd78016ebe72.slice/crio-0e61ec2701bfc25eb5be928b08cc38e792bd258a0029c05a51bd1e479e58f0e3 WatchSource:0}: Error finding container 0e61ec2701bfc25eb5be928b08cc38e792bd258a0029c05a51bd1e479e58f0e3: Status 404 returned error can't find the container with id 0e61ec2701bfc25eb5be928b08cc38e792bd258a0029c05a51bd1e479e58f0e3
Mar 08 03:09:00.336966 master-0 kubenswrapper[3991]: I0308 03:09:00.336887 3991 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 08 03:09:00.343718 master-0 kubenswrapper[3991]: W0308 03:09:00.343633 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1a56802af72ce1aac6b5077f1695ac0.slice/crio-dfc903a3a09201aa3b1c76a517a337916f356be7b6618a2128b1dc4f4785ac63 WatchSource:0}: Error finding container dfc903a3a09201aa3b1c76a517a337916f356be7b6618a2128b1dc4f4785ac63: Status 404 returned error can't find the container with id dfc903a3a09201aa3b1c76a517a337916f356be7b6618a2128b1dc4f4785ac63
Mar 08 03:09:00.362280 master-0 kubenswrapper[3991]: W0308 03:09:00.362203 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f77c8e18b751d90bc0dfe2d4e304050.slice/crio-bc9a73581d61f23b90565e1504479bb07c7036d273c39ea527a5c5e5f96ad318 WatchSource:0}: Error finding container bc9a73581d61f23b90565e1504479bb07c7036d273c39ea527a5c5e5f96ad318: Status 404 returned error can't find the container with id bc9a73581d61f23b90565e1504479bb07c7036d273c39ea527a5c5e5f96ad318
Mar 08 03:09:00.407543 master-0 kubenswrapper[3991]: W0308 03:09:00.407445 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod354f29997baa583b6238f7de9108ee10.slice/crio-00d76aa6e00e12ac364afa83e5fd631d414e7872b31bf1feb62fc1d452ac8d6a WatchSource:0}: Error finding container 00d76aa6e00e12ac364afa83e5fd631d414e7872b31bf1feb62fc1d452ac8d6a: Status 404 returned error can't find the container with id 00d76aa6e00e12ac364afa83e5fd631d414e7872b31bf1feb62fc1d452ac8d6a
Mar 08 03:09:00.431520 master-0 kubenswrapper[3991]: W0308 03:09:00.431439
3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf78c05e1499b533b83f091333d61f045.slice/crio-9f1c6c0636a4899d7b1fba463483019132e2775ba2d317a272e9611e9eb04fdb WatchSource:0}: Error finding container 9f1c6c0636a4899d7b1fba463483019132e2775ba2d317a272e9611e9eb04fdb: Status 404 returned error can't find the container with id 9f1c6c0636a4899d7b1fba463483019132e2775ba2d317a272e9611e9eb04fdb Mar 08 03:09:00.461365 master-0 kubenswrapper[3991]: E0308 03:09:00.461275 3991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 08 03:09:00.530210 master-0 kubenswrapper[3991]: W0308 03:09:00.530054 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 03:09:00.530210 master-0 kubenswrapper[3991]: E0308 03:09:00.530189 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 03:09:00.631853 master-0 kubenswrapper[3991]: W0308 03:09:00.631677 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 03:09:00.631853 master-0 
kubenswrapper[3991]: E0308 03:09:00.631769 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 03:09:00.715268 master-0 kubenswrapper[3991]: I0308 03:09:00.715161 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:00.717238 master-0 kubenswrapper[3991]: I0308 03:09:00.717188 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:00.717355 master-0 kubenswrapper[3991]: I0308 03:09:00.717256 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:00.717355 master-0 kubenswrapper[3991]: I0308 03:09:00.717275 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:00.717355 master-0 kubenswrapper[3991]: I0308 03:09:00.717347 3991 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 08 03:09:00.718789 master-0 kubenswrapper[3991]: E0308 03:09:00.718708 3991 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 08 03:09:01.037422 master-0 kubenswrapper[3991]: I0308 03:09:01.037285 3991 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 08 03:09:01.039518 master-0 kubenswrapper[3991]: E0308 03:09:01.039473 3991 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: 
cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 03:09:01.045301 master-0 kubenswrapper[3991]: I0308 03:09:01.045269 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 03:09:01.223699 master-0 kubenswrapper[3991]: I0308 03:09:01.223632 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"bc9a73581d61f23b90565e1504479bb07c7036d273c39ea527a5c5e5f96ad318"} Mar 08 03:09:01.224807 master-0 kubenswrapper[3991]: I0308 03:09:01.224764 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"dfc903a3a09201aa3b1c76a517a337916f356be7b6618a2128b1dc4f4785ac63"} Mar 08 03:09:01.228767 master-0 kubenswrapper[3991]: I0308 03:09:01.228729 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"0e61ec2701bfc25eb5be928b08cc38e792bd258a0029c05a51bd1e479e58f0e3"} Mar 08 03:09:01.229792 master-0 kubenswrapper[3991]: I0308 03:09:01.229756 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"9f1c6c0636a4899d7b1fba463483019132e2775ba2d317a272e9611e9eb04fdb"} Mar 08 03:09:01.230975 master-0 kubenswrapper[3991]: I0308 03:09:01.230951 3991 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"00d76aa6e00e12ac364afa83e5fd631d414e7872b31bf1feb62fc1d452ac8d6a"} Mar 08 03:09:02.045346 master-0 kubenswrapper[3991]: I0308 03:09:02.045274 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 03:09:02.062563 master-0 kubenswrapper[3991]: E0308 03:09:02.062510 3991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 08 03:09:02.234578 master-0 kubenswrapper[3991]: I0308 03:09:02.234483 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"85f16f93cd690b5924a3bfd91c9387cfb9f04d71df5230de7d45bf3e26eb0168"} Mar 08 03:09:02.234733 master-0 kubenswrapper[3991]: I0308 03:09:02.234585 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:02.235232 master-0 kubenswrapper[3991]: I0308 03:09:02.235210 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:02.235232 master-0 kubenswrapper[3991]: I0308 03:09:02.235233 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:02.235313 master-0 kubenswrapper[3991]: I0308 03:09:02.235241 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientPID" Mar 08 03:09:02.319622 master-0 kubenswrapper[3991]: I0308 03:09:02.319564 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:02.320703 master-0 kubenswrapper[3991]: I0308 03:09:02.320673 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:02.320772 master-0 kubenswrapper[3991]: I0308 03:09:02.320709 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:02.320772 master-0 kubenswrapper[3991]: I0308 03:09:02.320721 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:02.320772 master-0 kubenswrapper[3991]: I0308 03:09:02.320766 3991 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 08 03:09:02.321425 master-0 kubenswrapper[3991]: E0308 03:09:02.321394 3991 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 08 03:09:02.421914 master-0 kubenswrapper[3991]: W0308 03:09:02.421855 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 03:09:02.422067 master-0 kubenswrapper[3991]: E0308 03:09:02.421924 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" 
logger="UnhandledError" Mar 08 03:09:02.546919 master-0 kubenswrapper[3991]: W0308 03:09:02.546852 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 03:09:02.547096 master-0 kubenswrapper[3991]: E0308 03:09:02.546927 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 03:09:02.558220 master-0 kubenswrapper[3991]: W0308 03:09:02.558180 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 03:09:02.558290 master-0 kubenswrapper[3991]: E0308 03:09:02.558234 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 03:09:02.805816 master-0 kubenswrapper[3991]: W0308 03:09:02.805662 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 03:09:02.805816 master-0 kubenswrapper[3991]: E0308 03:09:02.805760 3991 reflector.go:158] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 03:09:03.045662 master-0 kubenswrapper[3991]: I0308 03:09:03.045603 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 03:09:03.237777 master-0 kubenswrapper[3991]: I0308 03:09:03.237710 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"aa2691e43f5a9cad8fc8af5208972f2ff55b688600781969c01beedcd7050c10"} Mar 08 03:09:03.237777 master-0 kubenswrapper[3991]: I0308 03:09:03.237757 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"e9d535fbda5084e4f8056f4fa3682180009f17ff7a894af1bf7d64a04c950d56"} Mar 08 03:09:03.237777 master-0 kubenswrapper[3991]: I0308 03:09:03.237771 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:03.238511 master-0 kubenswrapper[3991]: I0308 03:09:03.238482 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:03.238562 master-0 kubenswrapper[3991]: I0308 03:09:03.238515 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:03.238562 master-0 kubenswrapper[3991]: I0308 03:09:03.238527 3991 kubelet_node_status.go:724] "Recording event message for node" 
node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:03.239838 master-0 kubenswrapper[3991]: I0308 03:09:03.239713 3991 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="85f16f93cd690b5924a3bfd91c9387cfb9f04d71df5230de7d45bf3e26eb0168" exitCode=0 Mar 08 03:09:03.239838 master-0 kubenswrapper[3991]: I0308 03:09:03.239760 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"85f16f93cd690b5924a3bfd91c9387cfb9f04d71df5230de7d45bf3e26eb0168"} Mar 08 03:09:03.239838 master-0 kubenswrapper[3991]: I0308 03:09:03.239841 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:03.240543 master-0 kubenswrapper[3991]: I0308 03:09:03.240524 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:03.240543 master-0 kubenswrapper[3991]: I0308 03:09:03.240542 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:03.240611 master-0 kubenswrapper[3991]: I0308 03:09:03.240551 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:04.045548 master-0 kubenswrapper[3991]: I0308 03:09:04.045492 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 03:09:04.243138 master-0 kubenswrapper[3991]: I0308 03:09:04.243102 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/0.log" Mar 08 
03:09:04.244147 master-0 kubenswrapper[3991]: I0308 03:09:04.244111 3991 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="395086e2c865ab0494f4bbe3e309504b8b9396d44ab593c0c704d19599e311db" exitCode=1 Mar 08 03:09:04.244230 master-0 kubenswrapper[3991]: I0308 03:09:04.244198 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:04.244273 master-0 kubenswrapper[3991]: I0308 03:09:04.244218 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"395086e2c865ab0494f4bbe3e309504b8b9396d44ab593c0c704d19599e311db"} Mar 08 03:09:04.244346 master-0 kubenswrapper[3991]: I0308 03:09:04.244300 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:04.244954 master-0 kubenswrapper[3991]: I0308 03:09:04.244887 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:04.245008 master-0 kubenswrapper[3991]: I0308 03:09:04.244964 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:04.245008 master-0 kubenswrapper[3991]: I0308 03:09:04.244982 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:04.245061 master-0 kubenswrapper[3991]: I0308 03:09:04.245040 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:04.245097 master-0 kubenswrapper[3991]: I0308 03:09:04.245070 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:04.245097 master-0 kubenswrapper[3991]: I0308 03:09:04.245081 3991 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:04.245415 master-0 kubenswrapper[3991]: I0308 03:09:04.245398 3991 scope.go:117] "RemoveContainer" containerID="395086e2c865ab0494f4bbe3e309504b8b9396d44ab593c0c704d19599e311db" Mar 08 03:09:05.045180 master-0 kubenswrapper[3991]: I0308 03:09:05.045118 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 03:09:05.215964 master-0 kubenswrapper[3991]: I0308 03:09:05.215315 3991 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 08 03:09:05.216541 master-0 kubenswrapper[3991]: E0308 03:09:05.216503 3991 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 03:09:05.248327 master-0 kubenswrapper[3991]: I0308 03:09:05.248291 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log" Mar 08 03:09:05.248772 master-0 kubenswrapper[3991]: I0308 03:09:05.248711 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/0.log" Mar 08 03:09:05.249153 master-0 kubenswrapper[3991]: I0308 03:09:05.249123 3991 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" 
containerID="fc0bf8511b85795538b8ced1d5c7a3f4a4a514af3750f1ae83943e90e54bd6bd" exitCode=1 Mar 08 03:09:05.249194 master-0 kubenswrapper[3991]: I0308 03:09:05.249161 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"fc0bf8511b85795538b8ced1d5c7a3f4a4a514af3750f1ae83943e90e54bd6bd"} Mar 08 03:09:05.249231 master-0 kubenswrapper[3991]: I0308 03:09:05.249197 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:05.249317 master-0 kubenswrapper[3991]: I0308 03:09:05.249199 3991 scope.go:117] "RemoveContainer" containerID="395086e2c865ab0494f4bbe3e309504b8b9396d44ab593c0c704d19599e311db" Mar 08 03:09:05.249827 master-0 kubenswrapper[3991]: I0308 03:09:05.249799 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:05.249866 master-0 kubenswrapper[3991]: I0308 03:09:05.249833 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:05.249866 master-0 kubenswrapper[3991]: I0308 03:09:05.249845 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:05.250255 master-0 kubenswrapper[3991]: I0308 03:09:05.250228 3991 scope.go:117] "RemoveContainer" containerID="fc0bf8511b85795538b8ced1d5c7a3f4a4a514af3750f1ae83943e90e54bd6bd" Mar 08 03:09:05.250404 master-0 kubenswrapper[3991]: E0308 03:09:05.250374 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72" Mar 08 03:09:05.263962 master-0 kubenswrapper[3991]: E0308 03:09:05.263925 3991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 08 03:09:05.522600 master-0 kubenswrapper[3991]: I0308 03:09:05.522529 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:05.523788 master-0 kubenswrapper[3991]: I0308 03:09:05.523760 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:05.523846 master-0 kubenswrapper[3991]: I0308 03:09:05.523801 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:05.523846 master-0 kubenswrapper[3991]: I0308 03:09:05.523813 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:05.523893 master-0 kubenswrapper[3991]: I0308 03:09:05.523858 3991 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 08 03:09:05.525002 master-0 kubenswrapper[3991]: E0308 03:09:05.524942 3991 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 08 03:09:05.797232 master-0 kubenswrapper[3991]: W0308 03:09:05.797133 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 
03:09:05.797383 master-0 kubenswrapper[3991]: E0308 03:09:05.797251 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 03:09:06.045541 master-0 kubenswrapper[3991]: I0308 03:09:06.045473 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 03:09:06.251263 master-0 kubenswrapper[3991]: I0308 03:09:06.251159 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:06.252160 master-0 kubenswrapper[3991]: I0308 03:09:06.252125 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:06.252227 master-0 kubenswrapper[3991]: I0308 03:09:06.252173 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:06.252227 master-0 kubenswrapper[3991]: I0308 03:09:06.252188 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:06.252510 master-0 kubenswrapper[3991]: I0308 03:09:06.252484 3991 scope.go:117] "RemoveContainer" containerID="fc0bf8511b85795538b8ced1d5c7a3f4a4a514af3750f1ae83943e90e54bd6bd" Mar 08 03:09:06.252650 master-0 kubenswrapper[3991]: E0308 03:09:06.252628 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio 
pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72" Mar 08 03:09:07.045423 master-0 kubenswrapper[3991]: I0308 03:09:07.045300 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 03:09:07.494459 master-0 kubenswrapper[3991]: W0308 03:09:07.494273 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 03:09:07.494459 master-0 kubenswrapper[3991]: E0308 03:09:07.494411 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 03:09:07.543125 master-0 kubenswrapper[3991]: W0308 03:09:07.543028 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 03:09:07.543125 master-0 kubenswrapper[3991]: E0308 03:09:07.543105 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 03:09:08.047447 master-0 kubenswrapper[3991]: I0308 03:09:08.047318 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 03:09:08.257475 master-0 kubenswrapper[3991]: I0308 03:09:08.257431 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log" Mar 08 03:09:08.259735 master-0 kubenswrapper[3991]: I0308 03:09:08.259690 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"f3c0f05b8863cad41e739a3290ee1b766e3215209ff171cd04766d542d2cefd2"} Mar 08 03:09:08.262017 master-0 kubenswrapper[3991]: I0308 03:09:08.261645 3991 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="c01067259586e4e210f6ac056b5faed267ec0e7e5fd3d0ff25d2928d118c8a91" exitCode=0 Mar 08 03:09:08.262017 master-0 kubenswrapper[3991]: I0308 03:09:08.261686 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerDied","Data":"c01067259586e4e210f6ac056b5faed267ec0e7e5fd3d0ff25d2928d118c8a91"} Mar 08 03:09:08.262017 master-0 kubenswrapper[3991]: I0308 03:09:08.261838 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:08.263180 master-0 kubenswrapper[3991]: I0308 03:09:08.263139 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:08.263180 
master-0 kubenswrapper[3991]: I0308 03:09:08.263177 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:08.263338 master-0 kubenswrapper[3991]: I0308 03:09:08.263189 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:08.264786 master-0 kubenswrapper[3991]: I0308 03:09:08.264732 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"f80accad2b75f0dbc8ca9ec1b9207f9c29402e934558ea0edecba0bf20e9769f"} Mar 08 03:09:08.264900 master-0 kubenswrapper[3991]: I0308 03:09:08.264791 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:08.265888 master-0 kubenswrapper[3991]: I0308 03:09:08.265859 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:08.265888 master-0 kubenswrapper[3991]: I0308 03:09:08.265881 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:08.265888 master-0 kubenswrapper[3991]: I0308 03:09:08.265890 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:08.269080 master-0 kubenswrapper[3991]: I0308 03:09:08.268985 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:08.270208 master-0 kubenswrapper[3991]: I0308 03:09:08.270163 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:08.270208 master-0 kubenswrapper[3991]: I0308 03:09:08.270190 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 
03:09:08.270208 master-0 kubenswrapper[3991]: I0308 03:09:08.270202 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:08.329957 master-0 kubenswrapper[3991]: E0308 03:09:08.329720 3991 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189abeef77d6faf7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.040750327 +0000 UTC m=+0.606687592,LastTimestamp:2026-03-08 03:08:59.040750327 +0000 UTC m=+0.606687592,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:08.554281 master-0 kubenswrapper[3991]: W0308 03:09:08.554176 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 08 03:09:08.554764 master-0 kubenswrapper[3991]: E0308 03:09:08.554291 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 08 03:09:09.206840 master-0 kubenswrapper[3991]: E0308 03:09:09.206796 3991 eviction_manager.go:285] "Eviction 
manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 08 03:09:09.269299 master-0 kubenswrapper[3991]: I0308 03:09:09.269216 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"bf4fabb9c08963210bf1f0d197a394d399879939569bdcc3789dd4b487cec36f"} Mar 08 03:09:09.272356 master-0 kubenswrapper[3991]: I0308 03:09:09.268625 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:09.273297 master-0 kubenswrapper[3991]: I0308 03:09:09.273257 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:09.273297 master-0 kubenswrapper[3991]: I0308 03:09:09.273298 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:09.273434 master-0 kubenswrapper[3991]: I0308 03:09:09.273307 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:09.862403 master-0 kubenswrapper[3991]: I0308 03:09:09.862337 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 03:09:10.051937 master-0 kubenswrapper[3991]: I0308 03:09:10.051865 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 03:09:11.050988 master-0 kubenswrapper[3991]: I0308 03:09:11.049924 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 03:09:11.668457 master-0 kubenswrapper[3991]: E0308 03:09:11.668409 3991 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 08 03:09:11.925360 master-0 kubenswrapper[3991]: I0308 03:09:11.925251 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:11.927183 master-0 kubenswrapper[3991]: I0308 03:09:11.926540 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:11.927183 master-0 kubenswrapper[3991]: I0308 03:09:11.926594 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:11.927183 master-0 kubenswrapper[3991]: I0308 03:09:11.926614 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:11.927183 master-0 kubenswrapper[3991]: I0308 03:09:11.926668 3991 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 08 03:09:11.934172 master-0 kubenswrapper[3991]: E0308 03:09:11.934117 3991 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 08 03:09:12.052323 master-0 kubenswrapper[3991]: I0308 03:09:12.052158 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the 
cluster scope Mar 08 03:09:12.276749 master-0 kubenswrapper[3991]: I0308 03:09:12.276677 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"67a655ba69c1284df3e55d35d8747eb2453fb400eccb0f1604d78be6e1c5d034"} Mar 08 03:09:12.276749 master-0 kubenswrapper[3991]: I0308 03:09:12.276709 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:12.278093 master-0 kubenswrapper[3991]: I0308 03:09:12.278029 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:12.278235 master-0 kubenswrapper[3991]: I0308 03:09:12.278101 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:12.278235 master-0 kubenswrapper[3991]: I0308 03:09:12.278121 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:12.279695 master-0 kubenswrapper[3991]: I0308 03:09:12.279649 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"296632ab9853e033010913fee076e7b35b875fbd7f05c08351eaf2c0ae69f50d"} Mar 08 03:09:12.279820 master-0 kubenswrapper[3991]: I0308 03:09:12.279760 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:12.280856 master-0 kubenswrapper[3991]: I0308 03:09:12.280822 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:12.281103 master-0 kubenswrapper[3991]: I0308 03:09:12.281076 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 
08 03:09:12.281249 master-0 kubenswrapper[3991]: I0308 03:09:12.281229 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:12.308804 master-0 kubenswrapper[3991]: I0308 03:09:12.308662 3991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 03:09:12.316939 master-0 kubenswrapper[3991]: I0308 03:09:12.316830 3991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 03:09:12.900846 master-0 kubenswrapper[3991]: I0308 03:09:12.900778 3991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:09:12.908145 master-0 kubenswrapper[3991]: I0308 03:09:12.908095 3991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:09:13.052983 master-0 kubenswrapper[3991]: I0308 03:09:13.052865 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 03:09:13.282967 master-0 kubenswrapper[3991]: I0308 03:09:13.282837 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:13.283250 master-0 kubenswrapper[3991]: I0308 03:09:13.283010 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:13.283250 master-0 kubenswrapper[3991]: I0308 03:09:13.283170 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 03:09:13.283427 master-0 kubenswrapper[3991]: I0308 03:09:13.283315 3991 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:09:13.284176 master-0 kubenswrapper[3991]: I0308 03:09:13.284128 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:13.284176 master-0 kubenswrapper[3991]: I0308 03:09:13.284187 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:13.284430 master-0 kubenswrapper[3991]: I0308 03:09:13.284211 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:13.284430 master-0 kubenswrapper[3991]: I0308 03:09:13.284323 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:13.284430 master-0 kubenswrapper[3991]: I0308 03:09:13.284366 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:13.284430 master-0 kubenswrapper[3991]: I0308 03:09:13.284384 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:13.290825 master-0 kubenswrapper[3991]: I0308 03:09:13.290781 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 03:09:13.338048 master-0 kubenswrapper[3991]: I0308 03:09:13.337978 3991 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 08 03:09:13.360811 master-0 kubenswrapper[3991]: I0308 03:09:13.360732 3991 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 08 03:09:14.053053 master-0 kubenswrapper[3991]: I0308 03:09:14.052989 3991 csi_plugin.go:884] Failed to contact API server when waiting 
for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 03:09:14.285179 master-0 kubenswrapper[3991]: I0308 03:09:14.285066 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:14.285179 master-0 kubenswrapper[3991]: I0308 03:09:14.285162 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:14.286156 master-0 kubenswrapper[3991]: I0308 03:09:14.286109 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:14.286289 master-0 kubenswrapper[3991]: I0308 03:09:14.286176 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:14.286289 master-0 kubenswrapper[3991]: I0308 03:09:14.286202 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:14.286689 master-0 kubenswrapper[3991]: I0308 03:09:14.286654 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:14.286849 master-0 kubenswrapper[3991]: I0308 03:09:14.286827 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:14.287001 master-0 kubenswrapper[3991]: I0308 03:09:14.286980 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:15.053137 master-0 kubenswrapper[3991]: I0308 03:09:15.053035 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 
03:09:15.287470 master-0 kubenswrapper[3991]: I0308 03:09:15.287385 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:15.288675 master-0 kubenswrapper[3991]: I0308 03:09:15.288621 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:15.288792 master-0 kubenswrapper[3991]: I0308 03:09:15.288679 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:15.288792 master-0 kubenswrapper[3991]: I0308 03:09:15.288703 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:16.050517 master-0 kubenswrapper[3991]: I0308 03:09:16.050458 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 03:09:16.714326 master-0 kubenswrapper[3991]: W0308 03:09:16.714172 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 08 03:09:16.714326 master-0 kubenswrapper[3991]: E0308 03:09:16.714261 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 08 03:09:17.050264 master-0 kubenswrapper[3991]: I0308 03:09:17.050193 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is 
forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 03:09:17.609049 master-0 kubenswrapper[3991]: W0308 03:09:17.608876 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 08 03:09:17.609049 master-0 kubenswrapper[3991]: E0308 03:09:17.608989 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 08 03:09:18.053405 master-0 kubenswrapper[3991]: I0308 03:09:18.053351 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 03:09:18.340170 master-0 kubenswrapper[3991]: E0308 03:09:18.339946 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189abeef77d6faf7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.040750327 +0000 UTC m=+0.606687592,LastTimestamp:2026-03-08 03:08:59.040750327 +0000 UTC m=+0.606687592,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 
03:09:18.347448 master-0 kubenswrapper[3991]: E0308 03:09:18.347245 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189abeef7bb04984 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.105323396 +0000 UTC m=+0.671260631,LastTimestamp:2026-03-08 03:08:59.105323396 +0000 UTC m=+0.671260631,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.353072 master-0 kubenswrapper[3991]: I0308 03:09:18.353016 3991 csr.go:261] certificate signing request csr-x4zwt is approved, waiting to be issued Mar 08 03:09:18.354320 master-0 kubenswrapper[3991]: E0308 03:09:18.354237 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189abeef7bb0e72f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.105363759 +0000 UTC m=+0.671300994,LastTimestamp:2026-03-08 03:08:59.105363759 +0000 UTC m=+0.671300994,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.361245 master-0 kubenswrapper[3991]: E0308 03:09:18.360958 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189abeef7bb17b8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.105401742 +0000 UTC m=+0.671338977,LastTimestamp:2026-03-08 03:08:59.105401742 +0000 UTC m=+0.671338977,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.368427 master-0 kubenswrapper[3991]: E0308 03:09:18.368252 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189abeef81b92a11 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.206568465 +0000 UTC m=+0.772505740,LastTimestamp:2026-03-08 03:08:59.206568465 +0000 UTC m=+0.772505740,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.374305 master-0 
kubenswrapper[3991]: E0308 03:09:18.374120 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189abeef7bb04984\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189abeef7bb04984 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.105323396 +0000 UTC m=+0.671260631,LastTimestamp:2026-03-08 03:08:59.304989081 +0000 UTC m=+0.870926336,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.379871 master-0 kubenswrapper[3991]: E0308 03:09:18.379687 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189abeef7bb0e72f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189abeef7bb0e72f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.105363759 +0000 UTC m=+0.671300994,LastTimestamp:2026-03-08 03:08:59.305022253 +0000 UTC m=+0.870959508,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.388174 master-0 kubenswrapper[3991]: E0308 03:09:18.387987 3991 event.go:359] "Server 
rejected event (will not retry!)" err="events \"master-0.189abeef7bb17b8e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189abeef7bb17b8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.105401742 +0000 UTC m=+0.671338977,LastTimestamp:2026-03-08 03:08:59.305039084 +0000 UTC m=+0.870976339,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.393477 master-0 kubenswrapper[3991]: E0308 03:09:18.393296 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189abeef7bb04984\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189abeef7bb04984 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.105323396 +0000 UTC m=+0.671260631,LastTimestamp:2026-03-08 03:08:59.318376597 +0000 UTC m=+0.884313852,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.400981 master-0 kubenswrapper[3991]: E0308 03:09:18.400752 3991 event.go:359] "Server rejected event (will not retry!)" err="events 
\"master-0.189abeef7bb0e72f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189abeef7bb0e72f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.105363759 +0000 UTC m=+0.671300994,LastTimestamp:2026-03-08 03:08:59.31841303 +0000 UTC m=+0.884350285,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.408886 master-0 kubenswrapper[3991]: E0308 03:09:18.408746 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189abeef7bb17b8e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189abeef7bb17b8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.105401742 +0000 UTC m=+0.671338977,LastTimestamp:2026-03-08 03:08:59.318429521 +0000 UTC m=+0.884366776,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.415868 master-0 kubenswrapper[3991]: E0308 03:09:18.415718 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189abeef7bb04984\" is forbidden: User \"system:anonymous\" cannot 
patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189abeef7bb04984 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.105323396 +0000 UTC m=+0.671260631,LastTimestamp:2026-03-08 03:08:59.320102205 +0000 UTC m=+0.886039460,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.421468 master-0 kubenswrapper[3991]: E0308 03:09:18.421310 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189abeef7bb0e72f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189abeef7bb0e72f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.105363759 +0000 UTC m=+0.671300994,LastTimestamp:2026-03-08 03:08:59.320124367 +0000 UTC m=+0.886061632,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.427083 master-0 kubenswrapper[3991]: E0308 03:09:18.426948 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189abeef7bb04984\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{master-0.189abeef7bb04984 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.105323396 +0000 UTC m=+0.671260631,LastTimestamp:2026-03-08 03:08:59.320140678 +0000 UTC m=+0.886077933,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.434095 master-0 kubenswrapper[3991]: E0308 03:09:18.433997 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189abeef7bb17b8e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189abeef7bb17b8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.105401742 +0000 UTC m=+0.671338977,LastTimestamp:2026-03-08 03:08:59.320155019 +0000 UTC m=+0.886092284,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.439320 master-0 kubenswrapper[3991]: E0308 03:09:18.439163 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189abeef7bb0e72f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189abeef7bb0e72f 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.105363759 +0000 UTC m=+0.671300994,LastTimestamp:2026-03-08 03:08:59.320160809 +0000 UTC m=+0.886098074,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.445486 master-0 kubenswrapper[3991]: E0308 03:09:18.445338 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189abeef7bb17b8e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189abeef7bb17b8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.105401742 +0000 UTC m=+0.671338977,LastTimestamp:2026-03-08 03:08:59.32017793 +0000 UTC m=+0.886115185,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.451934 master-0 kubenswrapper[3991]: E0308 03:09:18.451753 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189abeef7bb04984\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189abeef7bb04984 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.105323396 +0000 UTC m=+0.671260631,LastTimestamp:2026-03-08 03:08:59.321256344 +0000 UTC m=+0.887193609,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.460014 master-0 kubenswrapper[3991]: E0308 03:09:18.459218 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189abeef7bb0e72f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189abeef7bb0e72f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.105363759 +0000 UTC m=+0.671300994,LastTimestamp:2026-03-08 03:08:59.321326209 +0000 UTC m=+0.887263464,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.466136 master-0 kubenswrapper[3991]: E0308 03:09:18.465988 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189abeef7bb17b8e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189abeef7bb17b8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.105401742 +0000 UTC m=+0.671338977,LastTimestamp:2026-03-08 03:08:59.321392794 +0000 UTC m=+0.887330049,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.471356 master-0 kubenswrapper[3991]: E0308 03:09:18.471213 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189abeef7bb04984\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189abeef7bb04984 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.105323396 +0000 UTC m=+0.671260631,LastTimestamp:2026-03-08 03:08:59.321580456 +0000 UTC m=+0.887517711,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.477757 master-0 kubenswrapper[3991]: E0308 03:09:18.477614 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189abeef7bb0e72f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189abeef7bb0e72f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.105363759 +0000 UTC m=+0.671300994,LastTimestamp:2026-03-08 03:08:59.321599208 +0000 UTC m=+0.887536463,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.484394 master-0 kubenswrapper[3991]: W0308 03:09:18.484320 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 08 03:09:18.484394 master-0 kubenswrapper[3991]: E0308 03:09:18.484389 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 08 03:09:18.484694 master-0 kubenswrapper[3991]: E0308 03:09:18.484510 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189abeef7bb17b8e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189abeef7bb17b8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.105401742 
+0000 UTC m=+0.671338977,LastTimestamp:2026-03-08 03:08:59.321613829 +0000 UTC m=+0.887551084,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.490695 master-0 kubenswrapper[3991]: E0308 03:09:18.490556 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189abeef7bb04984\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189abeef7bb04984 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.105323396 +0000 UTC m=+0.671260631,LastTimestamp:2026-03-08 03:08:59.322837292 +0000 UTC m=+0.888774547,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.496200 master-0 kubenswrapper[3991]: E0308 03:09:18.495999 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189abeef7bb0e72f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189abeef7bb0e72f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:08:59.105363759 +0000 UTC m=+0.671300994,LastTimestamp:2026-03-08 03:08:59.322869095 
+0000 UTC m=+0.888806360,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.504247 master-0 kubenswrapper[3991]: E0308 03:09:18.504126 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189abeefc517448c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:00.3368091 +0000 UTC m=+1.902746365,LastTimestamp:2026-03-08 03:09:00.3368091 +0000 UTC m=+1.902746365,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.511139 master-0 kubenswrapper[3991]: E0308 03:09:18.510892 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189abeefc5dc91f9 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:00.349739513 +0000 UTC m=+1.915676778,LastTimestamp:2026-03-08 03:09:00.349739513 +0000 UTC m=+1.915676778,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.518260 master-0 kubenswrapper[3991]: E0308 03:09:18.518115 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189abeefc6e9abd4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:00.367375316 +0000 UTC m=+1.933312571,LastTimestamp:2026-03-08 03:09:00.367375316 +0000 UTC m=+1.933312571,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.525144 master-0 kubenswrapper[3991]: E0308 03:09:18.524974 3991 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189abeefc99e6375 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:00.412773237 +0000 UTC m=+1.978710492,LastTimestamp:2026-03-08 03:09:00.412773237 +0000 UTC m=+1.978710492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.531749 master-0 kubenswrapper[3991]: E0308 03:09:18.531571 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189abeefcb135671 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:00.437214833 +0000 UTC m=+2.003152088,LastTimestamp:2026-03-08 03:09:00.437214833 +0000 UTC 
m=+2.003152088,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.539447 master-0 kubenswrapper[3991]: E0308 03:09:18.539292 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189abef01ef0d4f3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" in 1.507s (1.507s including waiting). 
Image size: 465086330 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:01.844239603 +0000 UTC m=+3.410176828,LastTimestamp:2026-03-08 03:09:01.844239603 +0000 UTC m=+3.410176828,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.547479 master-0 kubenswrapper[3991]: E0308 03:09:18.547185 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189abef02ca7dd2c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:02.074338604 +0000 UTC m=+3.640275829,LastTimestamp:2026-03-08 03:09:02.074338604 +0000 UTC m=+3.640275829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.554020 master-0 kubenswrapper[3991]: E0308 03:09:18.553824 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189abef02d5791b4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:02.08585362 +0000 UTC m=+3.651790835,LastTimestamp:2026-03-08 03:09:02.08585362 +0000 UTC m=+3.651790835,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.562116 master-0 kubenswrapper[3991]: E0308 03:09:18.561891 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189abef05b4d4de3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\" in 2.444s (2.444s including waiting). 
Image size: 529324693 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:02.856932835 +0000 UTC m=+4.422870060,LastTimestamp:2026-03-08 03:09:02.856932835 +0000 UTC m=+4.422870060,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.568755 master-0 kubenswrapper[3991]: E0308 03:09:18.568601 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189abef0649f5c43 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:03.013305411 +0000 UTC m=+4.579242636,LastTimestamp:2026-03-08 03:09:03.013305411 +0000 UTC m=+4.579242636,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.574553 master-0 kubenswrapper[3991]: E0308 03:09:18.574411 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189abef0655e5b64 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:03.025822564 +0000 UTC m=+4.591759789,LastTimestamp:2026-03-08 03:09:03.025822564 +0000 UTC m=+4.591759789,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.581382 master-0 kubenswrapper[3991]: E0308 03:09:18.581238 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189abef065841ad0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:03.0282964 +0000 UTC m=+4.594233625,LastTimestamp:2026-03-08 03:09:03.0282964 +0000 UTC m=+4.594233625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.587842 master-0 kubenswrapper[3991]: E0308 03:09:18.587647 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189abef06ee7aa17 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:03.185816087 +0000 UTC m=+4.751753312,LastTimestamp:2026-03-08 03:09:03.185816087 +0000 UTC m=+4.751753312,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.594135 master-0 kubenswrapper[3991]: E0308 03:09:18.593872 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189abef06f9ecdb9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:03.197818297 +0000 UTC m=+4.763755522,LastTimestamp:2026-03-08 03:09:03.197818297 +0000 UTC m=+4.763755522,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.601774 master-0 kubenswrapper[3991]: E0308 03:09:18.601626 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189abef0724a7f53 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:03.242624851 +0000 UTC m=+4.808562076,LastTimestamp:2026-03-08 03:09:03.242624851 +0000 UTC m=+4.808562076,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.609852 master-0 kubenswrapper[3991]: E0308 03:09:18.609606 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189abef07d969ac7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:03.432161991 +0000 UTC m=+4.998099226,LastTimestamp:2026-03-08 03:09:03.432161991 +0000 UTC m=+4.998099226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.617388 master-0 kubenswrapper[3991]: E0308 03:09:18.617215 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189abef07e29e608 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:03.441815048 +0000 UTC m=+5.007752273,LastTimestamp:2026-03-08 03:09:03.441815048 +0000 UTC m=+5.007752273,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.626455 master-0 kubenswrapper[3991]: E0308 03:09:18.626256 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189abef0724a7f53\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189abef0724a7f53 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:03.242624851 +0000 UTC m=+4.808562076,LastTimestamp:2026-03-08 03:09:04.248315057 +0000 UTC m=+5.814252282,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.632930 master-0 kubenswrapper[3991]: E0308 03:09:18.632722 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189abef07d969ac7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189abef07d969ac7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:03.432161991 +0000 UTC m=+4.998099226,LastTimestamp:2026-03-08 03:09:04.4784806 +0000 UTC m=+6.044417825,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.639465 master-0 kubenswrapper[3991]: E0308 03:09:18.639304 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189abef07e29e608\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" 
event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189abef07e29e608 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:03.441815048 +0000 UTC m=+5.007752273,LastTimestamp:2026-03-08 03:09:04.493587573 +0000 UTC m=+6.059524798,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.645732 master-0 kubenswrapper[3991]: E0308 03:09:18.645548 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189abef0e9f5ed7b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:05.250348411 +0000 UTC m=+6.816285636,LastTimestamp:2026-03-08 03:09:05.250348411 +0000 UTC m=+6.816285636,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.653345 master-0 kubenswrapper[3991]: E0308 03:09:18.653149 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189abef0e9f5ed7b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189abef0e9f5ed7b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:05.250348411 +0000 UTC m=+6.816285636,LastTimestamp:2026-03-08 03:09:06.2525999 +0000 UTC m=+7.818537125,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.659864 master-0 kubenswrapper[3991]: E0308 03:09:18.659676 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189abef178de526f kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 7.298s (7.298s including waiting). Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:07.647943279 +0000 UTC m=+9.213880544,LastTimestamp:2026-03-08 03:09:07.647943279 +0000 UTC m=+9.213880544,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.666247 master-0 kubenswrapper[3991]: E0308 03:09:18.666088 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189abef182774f17 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 7.441s (7.441s including waiting). 
Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:07.808964375 +0000 UTC m=+9.374901620,LastTimestamp:2026-03-08 03:09:07.808964375 +0000 UTC m=+9.374901620,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.672364 master-0 kubenswrapper[3991]: E0308 03:09:18.672277 3991 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 08 03:09:18.672658 master-0 kubenswrapper[3991]: E0308 03:09:18.672531 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189abef1851117ae kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:07.852597166 +0000 UTC m=+9.418534401,LastTimestamp:2026-03-08 03:09:07.852597166 +0000 UTC m=+9.418534401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.680455 master-0 kubenswrapper[3991]: E0308 03:09:18.680335 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189abef185c9b5c0 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:07.864696256 +0000 UTC m=+9.430633491,LastTimestamp:2026-03-08 03:09:07.864696256 +0000 UTC m=+9.430633491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.687934 master-0 kubenswrapper[3991]: E0308 03:09:18.687706 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189abef18a097428 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 7.498s (7.498s including waiting). 
Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:07.935982632 +0000 UTC m=+9.501919857,LastTimestamp:2026-03-08 03:09:07.935982632 +0000 UTC m=+9.501919857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.695706 master-0 kubenswrapper[3991]: E0308 03:09:18.695524 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189abef18fcc17d8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:08.0326246 +0000 UTC m=+9.598561825,LastTimestamp:2026-03-08 03:09:08.0326246 +0000 UTC m=+9.598561825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.703043 master-0 kubenswrapper[3991]: E0308 03:09:18.702712 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189abef190b3f3ff openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:08.047819775 +0000 UTC m=+9.613757010,LastTimestamp:2026-03-08 03:09:08.047819775 +0000 UTC m=+9.613757010,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.708233 master-0 kubenswrapper[3991]: E0308 03:09:18.708088 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189abef19839c438 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:08.17402988 +0000 UTC m=+9.739967105,LastTimestamp:2026-03-08 03:09:08.17402988 +0000 UTC m=+9.739967105,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.716436 master-0 kubenswrapper[3991]: E0308 03:09:18.716255 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" 
event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189abef198d141bf kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:08.183957951 +0000 UTC m=+9.749895176,LastTimestamp:2026-03-08 03:09:08.183957951 +0000 UTC m=+9.749895176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.723858 master-0 kubenswrapper[3991]: E0308 03:09:18.723715 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189abef198de6f0b kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:08.184821515 +0000 UTC m=+9.750758740,LastTimestamp:2026-03-08 03:09:08.184821515 +0000 UTC m=+9.750758740,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.733234 master-0 
kubenswrapper[3991]: E0308 03:09:18.731528 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189abef19de20a09 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:08.268943881 +0000 UTC m=+9.834881106,LastTimestamp:2026-03-08 03:09:08.268943881 +0000 UTC m=+9.834881106,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.739958 master-0 kubenswrapper[3991]: E0308 03:09:18.739835 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189abef1ac4f6d18 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:08.510993688 +0000 UTC 
m=+10.076930933,LastTimestamp:2026-03-08 03:09:08.510993688 +0000 UTC m=+10.076930933,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.746161 master-0 kubenswrapper[3991]: E0308 03:09:18.746064 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189abef1acfb343e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:08.522251326 +0000 UTC m=+10.088188561,LastTimestamp:2026-03-08 03:09:08.522251326 +0000 UTC m=+10.088188561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.751887 master-0 kubenswrapper[3991]: E0308 03:09:18.751714 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189abef1ad0e27fa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:08.52349337 +0000 UTC m=+10.089430605,LastTimestamp:2026-03-08 03:09:08.52349337 +0000 UTC m=+10.089430605,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.759022 master-0 kubenswrapper[3991]: E0308 03:09:18.758876 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189abef2607abb70 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\" in 3.348s (3.348s including waiting). 
Image size: 505242594 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:11.533730672 +0000 UTC m=+13.099667937,LastTimestamp:2026-03-08 03:09:11.533730672 +0000 UTC m=+13.099667937,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.766372 master-0 kubenswrapper[3991]: E0308 03:09:18.766220 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189abef2620ee62b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" in 3.036s (3.036s including waiting). 
Image size: 514980169 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:11.560218155 +0000 UTC m=+13.126155420,LastTimestamp:2026-03-08 03:09:11.560218155 +0000 UTC m=+13.126155420,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.772677 master-0 kubenswrapper[3991]: E0308 03:09:18.772452 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189abef26a227250 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:11.695716944 +0000 UTC m=+13.261654159,LastTimestamp:2026-03-08 03:09:11.695716944 +0000 UTC m=+13.261654159,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.779860 master-0 kubenswrapper[3991]: E0308 03:09:18.779739 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189abef26aa282ff kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:11.704109823 +0000 UTC m=+13.270047048,LastTimestamp:2026-03-08 03:09:11.704109823 +0000 UTC m=+13.270047048,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.786168 master-0 kubenswrapper[3991]: E0308 03:09:18.785940 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189abef26d9ecc54 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:11.7541981 +0000 UTC m=+13.320135315,LastTimestamp:2026-03-08 03:09:11.7541981 +0000 UTC m=+13.320135315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.792512 master-0 kubenswrapper[3991]: E0308 03:09:18.792411 3991 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189abef270e55f68 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:11.80915492 +0000 UTC m=+13.375092145,LastTimestamp:2026-03-08 03:09:11.80915492 +0000 UTC m=+13.375092145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:18.936134 master-0 kubenswrapper[3991]: I0308 03:09:18.935297 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:18.937272 master-0 kubenswrapper[3991]: I0308 03:09:18.937123 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:18.937272 master-0 kubenswrapper[3991]: I0308 03:09:18.937205 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:18.937272 master-0 kubenswrapper[3991]: I0308 03:09:18.937223 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:18.937272 master-0 kubenswrapper[3991]: I0308 03:09:18.937292 3991 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 08 03:09:18.945060 master-0 kubenswrapper[3991]: E0308 03:09:18.944980 3991 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" 
at the cluster scope" node="master-0" Mar 08 03:09:19.053068 master-0 kubenswrapper[3991]: I0308 03:09:19.052042 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 03:09:19.207238 master-0 kubenswrapper[3991]: E0308 03:09:19.207027 3991 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 08 03:09:19.216514 master-0 kubenswrapper[3991]: I0308 03:09:19.216458 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:19.217732 master-0 kubenswrapper[3991]: I0308 03:09:19.217677 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:19.217838 master-0 kubenswrapper[3991]: I0308 03:09:19.217752 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:19.217838 master-0 kubenswrapper[3991]: I0308 03:09:19.217777 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:19.218341 master-0 kubenswrapper[3991]: I0308 03:09:19.218294 3991 scope.go:117] "RemoveContainer" containerID="fc0bf8511b85795538b8ced1d5c7a3f4a4a514af3750f1ae83943e90e54bd6bd" Mar 08 03:09:19.232141 master-0 kubenswrapper[3991]: E0308 03:09:19.231785 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189abef0724a7f53\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189abef0724a7f53 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:03.242624851 +0000 UTC m=+4.808562076,LastTimestamp:2026-03-08 03:09:19.2229834 +0000 UTC m=+20.788920665,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:19.499215 master-0 kubenswrapper[3991]: E0308 03:09:19.499040 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189abef07d969ac7\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189abef07d969ac7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:03.432161991 +0000 UTC m=+4.998099226,LastTimestamp:2026-03-08 03:09:19.490622352 +0000 UTC m=+21.056559577,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:19.507948 master-0 kubenswrapper[3991]: E0308 
03:09:19.507772 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189abef07e29e608\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189abef07e29e608 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:03.441815048 +0000 UTC m=+5.007752273,LastTimestamp:2026-03-08 03:09:19.501722266 +0000 UTC m=+21.067659511,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:19.639382 master-0 kubenswrapper[3991]: W0308 03:09:19.639294 3991 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 08 03:09:19.639693 master-0 kubenswrapper[3991]: E0308 03:09:19.639387 3991 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 08 03:09:20.052169 master-0 kubenswrapper[3991]: I0308 03:09:20.052092 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource 
"csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 03:09:20.306707 master-0 kubenswrapper[3991]: I0308 03:09:20.306519 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log" Mar 08 03:09:20.307768 master-0 kubenswrapper[3991]: I0308 03:09:20.307713 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log" Mar 08 03:09:20.308866 master-0 kubenswrapper[3991]: I0308 03:09:20.308796 3991 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="6641777c0515379fb5521281634350e0ba16889bd714d491e11bd483e3de969d" exitCode=1 Mar 08 03:09:20.309028 master-0 kubenswrapper[3991]: I0308 03:09:20.308865 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"6641777c0515379fb5521281634350e0ba16889bd714d491e11bd483e3de969d"} Mar 08 03:09:20.309028 master-0 kubenswrapper[3991]: I0308 03:09:20.308970 3991 scope.go:117] "RemoveContainer" containerID="fc0bf8511b85795538b8ced1d5c7a3f4a4a514af3750f1ae83943e90e54bd6bd" Mar 08 03:09:20.309414 master-0 kubenswrapper[3991]: I0308 03:09:20.309337 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:20.312422 master-0 kubenswrapper[3991]: I0308 03:09:20.312058 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:20.312422 master-0 kubenswrapper[3991]: I0308 03:09:20.312121 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:20.312422 master-0 kubenswrapper[3991]: 
I0308 03:09:20.312140 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:20.312745 master-0 kubenswrapper[3991]: I0308 03:09:20.312606 3991 scope.go:117] "RemoveContainer" containerID="6641777c0515379fb5521281634350e0ba16889bd714d491e11bd483e3de969d" Mar 08 03:09:20.312868 master-0 kubenswrapper[3991]: E0308 03:09:20.312832 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72" Mar 08 03:09:20.325090 master-0 kubenswrapper[3991]: E0308 03:09:20.324800 3991 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189abef0e9f5ed7b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189abef0e9f5ed7b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:09:05.250348411 +0000 UTC m=+6.816285636,LastTimestamp:2026-03-08 03:09:20.312789373 +0000 UTC m=+21.878726608,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:09:21.050564 master-0 kubenswrapper[3991]: I0308 03:09:21.050504 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 03:09:21.320695 master-0 kubenswrapper[3991]: I0308 03:09:21.320517 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log" Mar 08 03:09:21.701487 master-0 kubenswrapper[3991]: I0308 03:09:21.701301 3991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:09:21.701655 master-0 kubenswrapper[3991]: I0308 03:09:21.701641 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:21.704579 master-0 kubenswrapper[3991]: I0308 03:09:21.704500 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:21.705530 master-0 kubenswrapper[3991]: I0308 03:09:21.705224 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:21.705530 master-0 kubenswrapper[3991]: I0308 03:09:21.705254 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:21.706198 master-0 kubenswrapper[3991]: I0308 03:09:21.706138 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:09:21.707426 master-0 kubenswrapper[3991]: I0308 03:09:21.707393 3991 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:09:22.051592 master-0 kubenswrapper[3991]: I0308 03:09:22.051537 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 03:09:22.079801 master-0 kubenswrapper[3991]: I0308 03:09:22.079737 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:09:22.087723 master-0 kubenswrapper[3991]: I0308 03:09:22.087628 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:09:22.325015 master-0 kubenswrapper[3991]: I0308 03:09:22.324814 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:22.326787 master-0 kubenswrapper[3991]: I0308 03:09:22.326125 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:22.326787 master-0 kubenswrapper[3991]: I0308 03:09:22.326183 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:22.326787 master-0 kubenswrapper[3991]: I0308 03:09:22.326206 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:23.054013 master-0 kubenswrapper[3991]: I0308 03:09:23.053951 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 03:09:23.327887 master-0 kubenswrapper[3991]: I0308 03:09:23.327190 3991 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Mar 08 03:09:23.328663 master-0 kubenswrapper[3991]: I0308 03:09:23.328420 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:23.328663 master-0 kubenswrapper[3991]: I0308 03:09:23.328466 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:23.328663 master-0 kubenswrapper[3991]: I0308 03:09:23.328487 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:24.051583 master-0 kubenswrapper[3991]: I0308 03:09:24.051536 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 03:09:25.052053 master-0 kubenswrapper[3991]: I0308 03:09:25.051985 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 03:09:25.680548 master-0 kubenswrapper[3991]: E0308 03:09:25.680451 3991 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 08 03:09:25.946250 master-0 kubenswrapper[3991]: I0308 03:09:25.945966 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:25.947803 master-0 kubenswrapper[3991]: I0308 03:09:25.947742 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:25.947866 master-0 
kubenswrapper[3991]: I0308 03:09:25.947829 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:25.947866 master-0 kubenswrapper[3991]: I0308 03:09:25.947856 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:25.948021 master-0 kubenswrapper[3991]: I0308 03:09:25.947977 3991 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 08 03:09:25.955249 master-0 kubenswrapper[3991]: E0308 03:09:25.955192 3991 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 08 03:09:26.051115 master-0 kubenswrapper[3991]: I0308 03:09:26.051025 3991 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 03:09:26.933084 master-0 kubenswrapper[3991]: I0308 03:09:26.933020 3991 csr.go:257] certificate signing request csr-x4zwt is issued Mar 08 03:09:27.053018 master-0 kubenswrapper[3991]: I0308 03:09:27.052974 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 03:09:27.069755 master-0 kubenswrapper[3991]: I0308 03:09:27.069706 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 03:09:27.131302 master-0 kubenswrapper[3991]: I0308 03:09:27.131251 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 03:09:27.392744 master-0 kubenswrapper[3991]: I0308 03:09:27.392649 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 03:09:27.392744 master-0 kubenswrapper[3991]: E0308 03:09:27.392699 3991 csi_plugin.go:305] Failed to initialize 
CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 08 03:09:27.414884 master-0 kubenswrapper[3991]: I0308 03:09:27.414804 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 03:09:27.434322 master-0 kubenswrapper[3991]: I0308 03:09:27.434247 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 03:09:27.496702 master-0 kubenswrapper[3991]: I0308 03:09:27.496620 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 03:09:27.755290 master-0 kubenswrapper[3991]: I0308 03:09:27.755141 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 03:09:27.755290 master-0 kubenswrapper[3991]: E0308 03:09:27.755180 3991 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 08 03:09:27.857159 master-0 kubenswrapper[3991]: I0308 03:09:27.857081 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 03:09:27.874768 master-0 kubenswrapper[3991]: I0308 03:09:27.874716 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 03:09:27.914181 master-0 kubenswrapper[3991]: I0308 03:09:27.914068 3991 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 08 03:09:27.934538 master-0 kubenswrapper[3991]: I0308 03:09:27.934480 3991 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-09 03:01:08 +0000 UTC, rotation deadline is 2026-03-08 23:10:03.40405918 +0000 UTC Mar 08 03:09:27.934538 master-0 kubenswrapper[3991]: I0308 03:09:27.934526 3991 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 20h0m35.469539809s for next certificate rotation Mar 08 
03:09:27.940717 master-0 kubenswrapper[3991]: I0308 03:09:27.940640 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 03:09:28.202596 master-0 kubenswrapper[3991]: I0308 03:09:28.202509 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 03:09:28.202596 master-0 kubenswrapper[3991]: E0308 03:09:28.202549 3991 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 08 03:09:28.769523 master-0 kubenswrapper[3991]: I0308 03:09:28.769485 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 03:09:28.784414 master-0 kubenswrapper[3991]: I0308 03:09:28.784394 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 03:09:28.839380 master-0 kubenswrapper[3991]: I0308 03:09:28.839305 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 03:09:29.117184 master-0 kubenswrapper[3991]: I0308 03:09:29.117093 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 03:09:29.117720 master-0 kubenswrapper[3991]: E0308 03:09:29.117698 3991 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 08 03:09:29.207540 master-0 kubenswrapper[3991]: E0308 03:09:29.207492 3991 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 08 03:09:32.390885 master-0 kubenswrapper[3991]: I0308 03:09:32.390794 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 03:09:32.408881 master-0 kubenswrapper[3991]: I0308 03:09:32.408750 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 03:09:32.468471 master-0 kubenswrapper[3991]: 
I0308 03:09:32.468409 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 03:09:32.688192 master-0 kubenswrapper[3991]: E0308 03:09:32.688032 3991 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0" Mar 08 03:09:32.747513 master-0 kubenswrapper[3991]: I0308 03:09:32.747463 3991 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 08 03:09:32.747513 master-0 kubenswrapper[3991]: E0308 03:09:32.747510 3991 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 08 03:09:32.956473 master-0 kubenswrapper[3991]: I0308 03:09:32.956273 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:32.958314 master-0 kubenswrapper[3991]: I0308 03:09:32.958236 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:32.958314 master-0 kubenswrapper[3991]: I0308 03:09:32.958285 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:32.958314 master-0 kubenswrapper[3991]: I0308 03:09:32.958302 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:32.958622 master-0 kubenswrapper[3991]: I0308 03:09:32.958358 3991 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 08 03:09:32.975271 master-0 kubenswrapper[3991]: I0308 03:09:32.975204 3991 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 08 03:09:32.975271 master-0 kubenswrapper[3991]: E0308 03:09:32.975260 3991 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found" Mar 08 
03:09:32.987328 master-0 kubenswrapper[3991]: E0308 03:09:32.987286 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:33.051587 master-0 kubenswrapper[3991]: I0308 03:09:33.051538 3991 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 08 03:09:33.078048 master-0 kubenswrapper[3991]: I0308 03:09:33.077448 3991 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 08 03:09:33.088257 master-0 kubenswrapper[3991]: E0308 03:09:33.088183 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:33.094283 master-0 kubenswrapper[3991]: I0308 03:09:33.094237 3991 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 08 03:09:33.189179 master-0 kubenswrapper[3991]: E0308 03:09:33.189088 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:33.290064 master-0 kubenswrapper[3991]: E0308 03:09:33.289961 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:33.391064 master-0 kubenswrapper[3991]: E0308 03:09:33.390988 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:33.492035 master-0 kubenswrapper[3991]: E0308 03:09:33.491952 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:33.593710 master-0 kubenswrapper[3991]: E0308 03:09:33.593562 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:33.693774 master-0 kubenswrapper[3991]: E0308 03:09:33.693710 3991 kubelet_node_status.go:503] "Error getting the current node from 
lister" err="node \"master-0\" not found" Mar 08 03:09:33.795065 master-0 kubenswrapper[3991]: E0308 03:09:33.795013 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:33.895799 master-0 kubenswrapper[3991]: E0308 03:09:33.895663 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:33.945680 master-0 kubenswrapper[3991]: I0308 03:09:33.945616 3991 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 08 03:09:33.996362 master-0 kubenswrapper[3991]: E0308 03:09:33.996271 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:34.097211 master-0 kubenswrapper[3991]: E0308 03:09:34.097117 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:34.198227 master-0 kubenswrapper[3991]: E0308 03:09:34.198021 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:34.298753 master-0 kubenswrapper[3991]: E0308 03:09:34.298660 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:34.399081 master-0 kubenswrapper[3991]: E0308 03:09:34.399001 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:34.501331 master-0 kubenswrapper[3991]: E0308 03:09:34.501135 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:34.601946 master-0 kubenswrapper[3991]: E0308 03:09:34.601841 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:34.702263 master-0 kubenswrapper[3991]: E0308 03:09:34.702171 3991 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:34.803074 master-0 kubenswrapper[3991]: E0308 03:09:34.803010 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:34.903995 master-0 kubenswrapper[3991]: E0308 03:09:34.903930 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:35.005234 master-0 kubenswrapper[3991]: E0308 03:09:35.005138 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:35.105884 master-0 kubenswrapper[3991]: E0308 03:09:35.105690 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:35.206946 master-0 kubenswrapper[3991]: E0308 03:09:35.206869 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:35.307690 master-0 kubenswrapper[3991]: E0308 03:09:35.307578 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:35.408344 master-0 kubenswrapper[3991]: E0308 03:09:35.408234 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:35.508642 master-0 kubenswrapper[3991]: E0308 03:09:35.508563 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:35.609749 master-0 kubenswrapper[3991]: E0308 03:09:35.609668 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:35.710628 master-0 kubenswrapper[3991]: E0308 03:09:35.710513 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:35.811414 
master-0 kubenswrapper[3991]: E0308 03:09:35.811323 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:35.912144 master-0 kubenswrapper[3991]: E0308 03:09:35.912059 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:36.012665 master-0 kubenswrapper[3991]: E0308 03:09:36.012538 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:36.113682 master-0 kubenswrapper[3991]: E0308 03:09:36.113579 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:36.213832 master-0 kubenswrapper[3991]: E0308 03:09:36.213734 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:36.217325 master-0 kubenswrapper[3991]: I0308 03:09:36.217275 3991 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:09:36.218791 master-0 kubenswrapper[3991]: I0308 03:09:36.218738 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:09:36.218910 master-0 kubenswrapper[3991]: I0308 03:09:36.218805 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:09:36.218910 master-0 kubenswrapper[3991]: I0308 03:09:36.218830 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:09:36.219502 master-0 kubenswrapper[3991]: I0308 03:09:36.219447 3991 scope.go:117] "RemoveContainer" containerID="6641777c0515379fb5521281634350e0ba16889bd714d491e11bd483e3de969d" Mar 08 03:09:36.219791 master-0 kubenswrapper[3991]: E0308 03:09:36.219733 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72" Mar 08 03:09:36.315141 master-0 kubenswrapper[3991]: E0308 03:09:36.314952 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:36.415497 master-0 kubenswrapper[3991]: E0308 03:09:36.415357 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:36.516664 master-0 kubenswrapper[3991]: E0308 03:09:36.516517 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:36.617255 master-0 kubenswrapper[3991]: E0308 03:09:36.617081 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:36.718396 master-0 kubenswrapper[3991]: E0308 03:09:36.718296 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:36.819583 master-0 kubenswrapper[3991]: E0308 03:09:36.819483 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:36.920580 master-0 kubenswrapper[3991]: E0308 03:09:36.920394 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:37.021373 master-0 kubenswrapper[3991]: E0308 03:09:37.021242 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:37.121757 master-0 kubenswrapper[3991]: E0308 03:09:37.121608 3991 kubelet_node_status.go:503] "Error getting the current node from 
lister" err="node \"master-0\" not found" Mar 08 03:09:37.222964 master-0 kubenswrapper[3991]: E0308 03:09:37.222755 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:37.324032 master-0 kubenswrapper[3991]: E0308 03:09:37.323933 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:37.424380 master-0 kubenswrapper[3991]: E0308 03:09:37.424303 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:37.525025 master-0 kubenswrapper[3991]: E0308 03:09:37.524896 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:37.625609 master-0 kubenswrapper[3991]: E0308 03:09:37.625233 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:37.726072 master-0 kubenswrapper[3991]: E0308 03:09:37.725978 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:37.826548 master-0 kubenswrapper[3991]: E0308 03:09:37.826341 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:37.927657 master-0 kubenswrapper[3991]: E0308 03:09:37.927544 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:38.028354 master-0 kubenswrapper[3991]: E0308 03:09:38.028234 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:38.129460 master-0 kubenswrapper[3991]: E0308 03:09:38.129269 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:38.230081 master-0 kubenswrapper[3991]: E0308 03:09:38.229972 3991 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:38.331290 master-0 kubenswrapper[3991]: E0308 03:09:38.331175 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:38.431506 master-0 kubenswrapper[3991]: E0308 03:09:38.431290 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:38.532616 master-0 kubenswrapper[3991]: E0308 03:09:38.532505 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:38.633605 master-0 kubenswrapper[3991]: E0308 03:09:38.633512 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:38.733790 master-0 kubenswrapper[3991]: E0308 03:09:38.733644 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:38.834324 master-0 kubenswrapper[3991]: E0308 03:09:38.834229 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:38.934794 master-0 kubenswrapper[3991]: E0308 03:09:38.934703 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:39.036092 master-0 kubenswrapper[3991]: E0308 03:09:39.035935 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:39.136374 master-0 kubenswrapper[3991]: E0308 03:09:39.136263 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:39.207717 master-0 kubenswrapper[3991]: E0308 03:09:39.207625 3991 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not 
found" Mar 08 03:09:39.237475 master-0 kubenswrapper[3991]: E0308 03:09:39.237349 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:39.287850 master-0 kubenswrapper[3991]: I0308 03:09:39.287713 3991 csr.go:261] certificate signing request csr-knj24 is approved, waiting to be issued Mar 08 03:09:39.297224 master-0 kubenswrapper[3991]: I0308 03:09:39.297160 3991 csr.go:257] certificate signing request csr-knj24 is issued Mar 08 03:09:39.338162 master-0 kubenswrapper[3991]: E0308 03:09:39.338066 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:39.438590 master-0 kubenswrapper[3991]: E0308 03:09:39.438468 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:39.539790 master-0 kubenswrapper[3991]: E0308 03:09:39.539568 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:39.640942 master-0 kubenswrapper[3991]: E0308 03:09:39.640752 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:39.741832 master-0 kubenswrapper[3991]: E0308 03:09:39.741708 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:39.842412 master-0 kubenswrapper[3991]: E0308 03:09:39.842241 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:39.942702 master-0 kubenswrapper[3991]: E0308 03:09:39.942596 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:40.043473 master-0 kubenswrapper[3991]: E0308 03:09:40.043372 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 
08 03:09:40.143991 master-0 kubenswrapper[3991]: E0308 03:09:40.143754 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:40.244750 master-0 kubenswrapper[3991]: E0308 03:09:40.244632 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:40.298783 master-0 kubenswrapper[3991]: I0308 03:09:40.298674 3991 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-09 03:01:08 +0000 UTC, rotation deadline is 2026-03-08 20:39:50.027292487 +0000 UTC Mar 08 03:09:40.298783 master-0 kubenswrapper[3991]: I0308 03:09:40.298738 3991 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h30m9.728559367s for next certificate rotation Mar 08 03:09:40.345586 master-0 kubenswrapper[3991]: E0308 03:09:40.345492 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:40.445958 master-0 kubenswrapper[3991]: E0308 03:09:40.445702 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:40.547068 master-0 kubenswrapper[3991]: E0308 03:09:40.546884 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:40.647432 master-0 kubenswrapper[3991]: E0308 03:09:40.647349 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:40.747793 master-0 kubenswrapper[3991]: E0308 03:09:40.747633 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:40.848894 master-0 kubenswrapper[3991]: E0308 03:09:40.848780 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:40.949129 master-0 kubenswrapper[3991]: 
E0308 03:09:40.949027 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:41.050264 master-0 kubenswrapper[3991]: E0308 03:09:41.050164 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:41.150383 master-0 kubenswrapper[3991]: E0308 03:09:41.150273 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:41.250526 master-0 kubenswrapper[3991]: E0308 03:09:41.250438 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:41.299341 master-0 kubenswrapper[3991]: I0308 03:09:41.299243 3991 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-09 03:01:08 +0000 UTC, rotation deadline is 2026-03-08 23:50:56.565787569 +0000 UTC Mar 08 03:09:41.299341 master-0 kubenswrapper[3991]: I0308 03:09:41.299285 3991 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h41m15.266507139s for next certificate rotation Mar 08 03:09:41.350726 master-0 kubenswrapper[3991]: E0308 03:09:41.350576 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:41.451425 master-0 kubenswrapper[3991]: E0308 03:09:41.451343 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:41.551527 master-0 kubenswrapper[3991]: E0308 03:09:41.551451 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:41.651764 master-0 kubenswrapper[3991]: E0308 03:09:41.651629 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:41.751945 master-0 kubenswrapper[3991]: E0308 03:09:41.751840 3991 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:41.852506 master-0 kubenswrapper[3991]: E0308 03:09:41.852409 3991 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:09:41.885383 master-0 kubenswrapper[3991]: I0308 03:09:41.885296 3991 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 08 03:09:42.054973 master-0 kubenswrapper[3991]: I0308 03:09:42.054872 3991 apiserver.go:52] "Watching apiserver" Mar 08 03:09:42.060326 master-0 kubenswrapper[3991]: I0308 03:09:42.060263 3991 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 08 03:09:42.060618 master-0 kubenswrapper[3991]: I0308 03:09:42.060568 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-rtvl6","openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld","openshift-network-operator/network-operator-7c649bf6d4-wxrfp"] Mar 08 03:09:42.061079 master-0 kubenswrapper[3991]: I0308 03:09:42.061025 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-rtvl6" Mar 08 03:09:42.061079 master-0 kubenswrapper[3991]: I0308 03:09:42.061049 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:09:42.062455 master-0 kubenswrapper[3991]: I0308 03:09:42.062382 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" Mar 08 03:09:42.065152 master-0 kubenswrapper[3991]: I0308 03:09:42.065058 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt" Mar 08 03:09:42.065793 master-0 kubenswrapper[3991]: I0308 03:09:42.065694 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 08 03:09:42.066348 master-0 kubenswrapper[3991]: I0308 03:09:42.066296 3991 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret" Mar 08 03:09:42.066348 master-0 kubenswrapper[3991]: I0308 03:09:42.066318 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 08 03:09:42.066722 master-0 kubenswrapper[3991]: I0308 03:09:42.066674 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 08 03:09:42.067618 master-0 kubenswrapper[3991]: I0308 03:09:42.067550 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config" Mar 08 03:09:42.068860 master-0 kubenswrapper[3991]: I0308 03:09:42.068156 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt" Mar 08 03:09:42.068860 master-0 kubenswrapper[3991]: I0308 03:09:42.068217 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 08 03:09:42.068860 master-0 kubenswrapper[3991]: I0308 03:09:42.068586 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 08 03:09:42.068860 master-0 kubenswrapper[3991]: I0308 03:09:42.068776 3991 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 08 03:09:42.155548 master-0 kubenswrapper[3991]: I0308 03:09:42.155462 3991 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Mar 08 03:09:42.212987 master-0 kubenswrapper[3991]: I0308 03:09:42.212816 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:09:42.212987 master-0 kubenswrapper[3991]: I0308 03:09:42.212941 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-service-ca\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:09:42.213230 master-0 kubenswrapper[3991]: I0308 03:09:42.213049 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xrfv\" (UniqueName: \"kubernetes.io/projected/89fc77c9-b444-4828-8a35-c63ea9335245-kube-api-access-6xrfv\") pod \"network-operator-7c649bf6d4-wxrfp\" (UID: \"89fc77c9-b444-4828-8a35-c63ea9335245\") " pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" Mar 08 03:09:42.213230 master-0 kubenswrapper[3991]: I0308 03:09:42.213114 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-host-resolv-conf\") pod \"assisted-installer-controller-rtvl6\" (UID: \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\") " 
pod="assisted-installer/assisted-installer-controller-rtvl6" Mar 08 03:09:42.213230 master-0 kubenswrapper[3991]: I0308 03:09:42.213173 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/89fc77c9-b444-4828-8a35-c63ea9335245-host-etc-kube\") pod \"network-operator-7c649bf6d4-wxrfp\" (UID: \"89fc77c9-b444-4828-8a35-c63ea9335245\") " pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" Mar 08 03:09:42.213410 master-0 kubenswrapper[3991]: I0308 03:09:42.213258 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-host-ca-bundle\") pod \"assisted-installer-controller-rtvl6\" (UID: \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\") " pod="assisted-installer/assisted-installer-controller-rtvl6" Mar 08 03:09:42.213410 master-0 kubenswrapper[3991]: I0308 03:09:42.213311 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwt4z\" (UniqueName: \"kubernetes.io/projected/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-kube-api-access-kwt4z\") pod \"assisted-installer-controller-rtvl6\" (UID: \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\") " pod="assisted-installer/assisted-installer-controller-rtvl6" Mar 08 03:09:42.213410 master-0 kubenswrapper[3991]: I0308 03:09:42.213371 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/89fc77c9-b444-4828-8a35-c63ea9335245-metrics-tls\") pod \"network-operator-7c649bf6d4-wxrfp\" (UID: \"89fc77c9-b444-4828-8a35-c63ea9335245\") " pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" Mar 08 03:09:42.213410 master-0 kubenswrapper[3991]: I0308 03:09:42.213408 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-host-var-run-resolv-conf\") pod \"assisted-installer-controller-rtvl6\" (UID: \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\") " pod="assisted-installer/assisted-installer-controller-rtvl6" Mar 08 03:09:42.213615 master-0 kubenswrapper[3991]: I0308 03:09:42.213446 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:09:42.213615 master-0 kubenswrapper[3991]: I0308 03:09:42.213480 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-kube-api-access\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:09:42.213615 master-0 kubenswrapper[3991]: I0308 03:09:42.213510 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-sno-bootstrap-files\") pod \"assisted-installer-controller-rtvl6\" (UID: \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\") " pod="assisted-installer/assisted-installer-controller-rtvl6" Mar 08 03:09:42.213615 master-0 kubenswrapper[3991]: I0308 03:09:42.213577 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-etc-ssl-certs\") pod 
\"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:09:42.314725 master-0 kubenswrapper[3991]: I0308 03:09:42.314570 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/89fc77c9-b444-4828-8a35-c63ea9335245-metrics-tls\") pod \"network-operator-7c649bf6d4-wxrfp\" (UID: \"89fc77c9-b444-4828-8a35-c63ea9335245\") " pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" Mar 08 03:09:42.314725 master-0 kubenswrapper[3991]: I0308 03:09:42.314661 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-host-ca-bundle\") pod \"assisted-installer-controller-rtvl6\" (UID: \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\") " pod="assisted-installer/assisted-installer-controller-rtvl6" Mar 08 03:09:42.314725 master-0 kubenswrapper[3991]: I0308 03:09:42.314696 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwt4z\" (UniqueName: \"kubernetes.io/projected/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-kube-api-access-kwt4z\") pod \"assisted-installer-controller-rtvl6\" (UID: \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\") " pod="assisted-installer/assisted-installer-controller-rtvl6" Mar 08 03:09:42.315063 master-0 kubenswrapper[3991]: I0308 03:09:42.314734 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-host-var-run-resolv-conf\") pod \"assisted-installer-controller-rtvl6\" (UID: \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\") " pod="assisted-installer/assisted-installer-controller-rtvl6" Mar 08 03:09:42.315063 master-0 kubenswrapper[3991]: I0308 03:09:42.314772 3991 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:09:42.315063 master-0 kubenswrapper[3991]: I0308 03:09:42.314804 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-kube-api-access\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:09:42.315063 master-0 kubenswrapper[3991]: I0308 03:09:42.314839 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-sno-bootstrap-files\") pod \"assisted-installer-controller-rtvl6\" (UID: \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\") " pod="assisted-installer/assisted-installer-controller-rtvl6" Mar 08 03:09:42.315063 master-0 kubenswrapper[3991]: I0308 03:09:42.314870 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:09:42.315063 master-0 kubenswrapper[3991]: I0308 03:09:42.314932 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert\") pod 
\"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:09:42.315063 master-0 kubenswrapper[3991]: I0308 03:09:42.314963 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-service-ca\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:09:42.315063 master-0 kubenswrapper[3991]: I0308 03:09:42.314994 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xrfv\" (UniqueName: \"kubernetes.io/projected/89fc77c9-b444-4828-8a35-c63ea9335245-kube-api-access-6xrfv\") pod \"network-operator-7c649bf6d4-wxrfp\" (UID: \"89fc77c9-b444-4828-8a35-c63ea9335245\") " pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" Mar 08 03:09:42.315063 master-0 kubenswrapper[3991]: I0308 03:09:42.315029 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/89fc77c9-b444-4828-8a35-c63ea9335245-host-etc-kube\") pod \"network-operator-7c649bf6d4-wxrfp\" (UID: \"89fc77c9-b444-4828-8a35-c63ea9335245\") " pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" Mar 08 03:09:42.315063 master-0 kubenswrapper[3991]: I0308 03:09:42.315062 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-host-resolv-conf\") pod \"assisted-installer-controller-rtvl6\" (UID: \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\") " pod="assisted-installer/assisted-installer-controller-rtvl6" Mar 08 03:09:42.315605 master-0 kubenswrapper[3991]: I0308 03:09:42.315345 3991 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-host-var-run-resolv-conf\") pod \"assisted-installer-controller-rtvl6\" (UID: \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\") " pod="assisted-installer/assisted-installer-controller-rtvl6" Mar 08 03:09:42.315605 master-0 kubenswrapper[3991]: I0308 03:09:42.315462 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-sno-bootstrap-files\") pod \"assisted-installer-controller-rtvl6\" (UID: \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\") " pod="assisted-installer/assisted-installer-controller-rtvl6" Mar 08 03:09:42.315605 master-0 kubenswrapper[3991]: I0308 03:09:42.315507 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-host-ca-bundle\") pod \"assisted-installer-controller-rtvl6\" (UID: \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\") " pod="assisted-installer/assisted-installer-controller-rtvl6" Mar 08 03:09:42.316769 master-0 kubenswrapper[3991]: I0308 03:09:42.315812 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:09:42.316769 master-0 kubenswrapper[3991]: I0308 03:09:42.315967 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/89fc77c9-b444-4828-8a35-c63ea9335245-host-etc-kube\") pod \"network-operator-7c649bf6d4-wxrfp\" (UID: \"89fc77c9-b444-4828-8a35-c63ea9335245\") " 
pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" Mar 08 03:09:42.316769 master-0 kubenswrapper[3991]: I0308 03:09:42.316163 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:09:42.316769 master-0 kubenswrapper[3991]: I0308 03:09:42.316094 3991 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 08 03:09:42.316769 master-0 kubenswrapper[3991]: I0308 03:09:42.316120 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-host-resolv-conf\") pod \"assisted-installer-controller-rtvl6\" (UID: \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\") " pod="assisted-installer/assisted-installer-controller-rtvl6" Mar 08 03:09:42.316769 master-0 kubenswrapper[3991]: E0308 03:09:42.316009 3991 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 08 03:09:42.316769 master-0 kubenswrapper[3991]: E0308 03:09:42.316506 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert podName:1f7c9726-057b-4c5c-8a03-9bc407dedb9b nodeName:}" failed. No retries permitted until 2026-03-08 03:09:42.816388599 +0000 UTC m=+44.382325864 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert") pod "cluster-version-operator-745944c6b7-rs4ld" (UID: "1f7c9726-057b-4c5c-8a03-9bc407dedb9b") : secret "cluster-version-operator-serving-cert" not found Mar 08 03:09:42.317820 master-0 kubenswrapper[3991]: I0308 03:09:42.317686 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-service-ca\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:09:42.328982 master-0 kubenswrapper[3991]: I0308 03:09:42.327207 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/89fc77c9-b444-4828-8a35-c63ea9335245-metrics-tls\") pod \"network-operator-7c649bf6d4-wxrfp\" (UID: \"89fc77c9-b444-4828-8a35-c63ea9335245\") " pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" Mar 08 03:09:42.347027 master-0 kubenswrapper[3991]: I0308 03:09:42.346975 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xrfv\" (UniqueName: \"kubernetes.io/projected/89fc77c9-b444-4828-8a35-c63ea9335245-kube-api-access-6xrfv\") pod \"network-operator-7c649bf6d4-wxrfp\" (UID: \"89fc77c9-b444-4828-8a35-c63ea9335245\") " pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" Mar 08 03:09:42.347869 master-0 kubenswrapper[3991]: I0308 03:09:42.347798 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwt4z\" (UniqueName: \"kubernetes.io/projected/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-kube-api-access-kwt4z\") pod \"assisted-installer-controller-rtvl6\" (UID: \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\") " pod="assisted-installer/assisted-installer-controller-rtvl6" 
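The MountVolume.SetUp failure above is retried with a doubling delay: the log records `durationBeforeRetry 500ms` on this attempt, then 1s, 2s, and 4s on the later attempts. A minimal sketch of that exponential-backoff pattern follows; the 2m2s cap is an assumption about the upper bound and is not taken from this log, and `next_delay` is an illustrative helper, not kubelet code.

```python
def next_delay(current_ms, cap_ms=122_000):
    """Return the next retry delay in milliseconds, doubling from 500 ms.

    Mirrors the doubling visible in the log entries above
    (durationBeforeRetry 500ms -> 1s -> 2s -> 4s). The 122_000 ms (2m2s)
    cap is an assumed ceiling, not a value taken from the log itself.
    """
    if current_ms == 0:
        return 500          # first retry after a failed mount
    return min(current_ms * 2, cap_ms)

# Reproduce the first four retry delays seen in the log.
delays = []
d = 0
for _ in range(4):
    d = next_delay(d)
    delays.append(d)
# delays == [500, 1000, 2000, 4000]
```

Backing off this way keeps the kubelet from hammering the API server while it waits for the `cluster-version-operator-serving-cert` secret to be created; once the secret appears, the next scheduled retry succeeds and the pod can start.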
Mar 08 03:09:42.348684 master-0 kubenswrapper[3991]: I0308 03:09:42.348627 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-kube-api-access\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:09:42.431110 master-0 kubenswrapper[3991]: I0308 03:09:42.430978 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-rtvl6" Mar 08 03:09:42.449702 master-0 kubenswrapper[3991]: W0308 03:09:42.449642 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5e953eb_2d1d_4d67_969b_bdecc69b61f0.slice/crio-7f48163433a800aeba4eb45dc8cedb1f723024dbb49945d8a5d3caa82f3778dc WatchSource:0}: Error finding container 7f48163433a800aeba4eb45dc8cedb1f723024dbb49945d8a5d3caa82f3778dc: Status 404 returned error can't find the container with id 7f48163433a800aeba4eb45dc8cedb1f723024dbb49945d8a5d3caa82f3778dc Mar 08 03:09:42.460778 master-0 kubenswrapper[3991]: I0308 03:09:42.460743 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" Mar 08 03:09:42.480138 master-0 kubenswrapper[3991]: W0308 03:09:42.480069 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89fc77c9_b444_4828_8a35_c63ea9335245.slice/crio-e0863a084dab5a5150480ef18603c4be97dcab69eda52c04e9d468c989d32511 WatchSource:0}: Error finding container e0863a084dab5a5150480ef18603c4be97dcab69eda52c04e9d468c989d32511: Status 404 returned error can't find the container with id e0863a084dab5a5150480ef18603c4be97dcab69eda52c04e9d468c989d32511 Mar 08 03:09:42.818475 master-0 kubenswrapper[3991]: I0308 03:09:42.818425 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:09:42.819079 master-0 kubenswrapper[3991]: E0308 03:09:42.818592 3991 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 08 03:09:42.819079 master-0 kubenswrapper[3991]: E0308 03:09:42.818663 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert podName:1f7c9726-057b-4c5c-8a03-9bc407dedb9b nodeName:}" failed. No retries permitted until 2026-03-08 03:09:43.818639026 +0000 UTC m=+45.384576281 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert") pod "cluster-version-operator-745944c6b7-rs4ld" (UID: "1f7c9726-057b-4c5c-8a03-9bc407dedb9b") : secret "cluster-version-operator-serving-cert" not found Mar 08 03:09:42.867314 master-0 kubenswrapper[3991]: I0308 03:09:42.867233 3991 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 08 03:09:43.378169 master-0 kubenswrapper[3991]: I0308 03:09:43.378123 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" event={"ID":"89fc77c9-b444-4828-8a35-c63ea9335245","Type":"ContainerStarted","Data":"e0863a084dab5a5150480ef18603c4be97dcab69eda52c04e9d468c989d32511"} Mar 08 03:09:43.379345 master-0 kubenswrapper[3991]: I0308 03:09:43.379277 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-rtvl6" event={"ID":"f5e953eb-2d1d-4d67-969b-bdecc69b61f0","Type":"ContainerStarted","Data":"7f48163433a800aeba4eb45dc8cedb1f723024dbb49945d8a5d3caa82f3778dc"} Mar 08 03:09:43.826218 master-0 kubenswrapper[3991]: I0308 03:09:43.826143 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:09:43.826848 master-0 kubenswrapper[3991]: E0308 03:09:43.826365 3991 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 08 03:09:43.826848 master-0 kubenswrapper[3991]: E0308 03:09:43.826466 3991 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert podName:1f7c9726-057b-4c5c-8a03-9bc407dedb9b nodeName:}" failed. No retries permitted until 2026-03-08 03:09:45.826438657 +0000 UTC m=+47.392375922 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert") pod "cluster-version-operator-745944c6b7-rs4ld" (UID: "1f7c9726-057b-4c5c-8a03-9bc407dedb9b") : secret "cluster-version-operator-serving-cert" not found Mar 08 03:09:45.840245 master-0 kubenswrapper[3991]: I0308 03:09:45.840168 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:09:45.840840 master-0 kubenswrapper[3991]: E0308 03:09:45.840335 3991 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 08 03:09:45.840840 master-0 kubenswrapper[3991]: E0308 03:09:45.840409 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert podName:1f7c9726-057b-4c5c-8a03-9bc407dedb9b nodeName:}" failed. No retries permitted until 2026-03-08 03:09:49.840386728 +0000 UTC m=+51.406323973 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert") pod "cluster-version-operator-745944c6b7-rs4ld" (UID: "1f7c9726-057b-4c5c-8a03-9bc407dedb9b") : secret "cluster-version-operator-serving-cert" not found
Mar 08 03:09:47.389477 master-0 kubenswrapper[3991]: I0308 03:09:47.389062 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" event={"ID":"89fc77c9-b444-4828-8a35-c63ea9335245","Type":"ContainerStarted","Data":"5ea4d742313470919626ed619f63545042ece5a1573517854bb097c5ce7c3645"}
Mar 08 03:09:47.393418 master-0 kubenswrapper[3991]: I0308 03:09:47.392956 3991 generic.go:334] "Generic (PLEG): container finished" podID="f5e953eb-2d1d-4d67-969b-bdecc69b61f0" containerID="fa364304eb5003254684c63c5eb9681efe16b224f31c3dd661492ecd5fa5deda" exitCode=0
Mar 08 03:09:47.393418 master-0 kubenswrapper[3991]: I0308 03:09:47.393010 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-rtvl6" event={"ID":"f5e953eb-2d1d-4d67-969b-bdecc69b61f0","Type":"ContainerDied","Data":"fa364304eb5003254684c63c5eb9681efe16b224f31c3dd661492ecd5fa5deda"}
Mar 08 03:09:47.417811 master-0 kubenswrapper[3991]: I0308 03:09:47.417747 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" podStartSLOduration=8.973674248 podStartE2EDuration="13.417729967s" podCreationTimestamp="2026-03-08 03:09:34 +0000 UTC" firstStartedPulling="2026-03-08 03:09:42.482830062 +0000 UTC m=+44.048767327" lastFinishedPulling="2026-03-08 03:09:46.926885821 +0000 UTC m=+48.492823046" observedRunningTime="2026-03-08 03:09:47.417386307 +0000 UTC m=+48.983323552" watchObservedRunningTime="2026-03-08 03:09:47.417729967 +0000 UTC m=+48.983667192"
Mar 08 03:09:48.232440 master-0 kubenswrapper[3991]: I0308 03:09:48.232376 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"]
Mar 08 03:09:48.232760 master-0 kubenswrapper[3991]: I0308 03:09:48.232608 3991 scope.go:117] "RemoveContainer" containerID="6641777c0515379fb5521281634350e0ba16889bd714d491e11bd483e3de969d"
Mar 08 03:09:48.417280 master-0 kubenswrapper[3991]: I0308 03:09:48.416713 3991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-rtvl6"
Mar 08 03:09:48.560816 master-0 kubenswrapper[3991]: I0308 03:09:48.560761 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwt4z\" (UniqueName: \"kubernetes.io/projected/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-kube-api-access-kwt4z\") pod \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\" (UID: \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\") "
Mar 08 03:09:48.560816 master-0 kubenswrapper[3991]: I0308 03:09:48.560822 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-sno-bootstrap-files\") pod \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\" (UID: \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\") "
Mar 08 03:09:48.561069 master-0 kubenswrapper[3991]: I0308 03:09:48.560874 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-host-var-run-resolv-conf\") pod \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\" (UID: \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\") "
Mar 08 03:09:48.561069 master-0 kubenswrapper[3991]: I0308 03:09:48.560929 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-host-resolv-conf\") pod \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\" (UID: \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\") "
Mar 08 03:09:48.561069 master-0 kubenswrapper[3991]: I0308 03:09:48.560989 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-host-ca-bundle\") pod \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\" (UID: \"f5e953eb-2d1d-4d67-969b-bdecc69b61f0\") "
Mar 08 03:09:48.562395 master-0 kubenswrapper[3991]: I0308 03:09:48.562301 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "f5e953eb-2d1d-4d67-969b-bdecc69b61f0" (UID: "f5e953eb-2d1d-4d67-969b-bdecc69b61f0"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:09:48.562395 master-0 kubenswrapper[3991]: I0308 03:09:48.562319 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "f5e953eb-2d1d-4d67-969b-bdecc69b61f0" (UID: "f5e953eb-2d1d-4d67-969b-bdecc69b61f0"). InnerVolumeSpecName "sno-bootstrap-files". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:09:48.562395 master-0 kubenswrapper[3991]: I0308 03:09:48.562388 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "f5e953eb-2d1d-4d67-969b-bdecc69b61f0" (UID: "f5e953eb-2d1d-4d67-969b-bdecc69b61f0"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:09:48.562827 master-0 kubenswrapper[3991]: I0308 03:09:48.562785 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "f5e953eb-2d1d-4d67-969b-bdecc69b61f0" (UID: "f5e953eb-2d1d-4d67-969b-bdecc69b61f0"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:09:48.565880 master-0 kubenswrapper[3991]: I0308 03:09:48.565827 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-kube-api-access-kwt4z" (OuterVolumeSpecName: "kube-api-access-kwt4z") pod "f5e953eb-2d1d-4d67-969b-bdecc69b61f0" (UID: "f5e953eb-2d1d-4d67-969b-bdecc69b61f0"). InnerVolumeSpecName "kube-api-access-kwt4z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:09:48.661440 master-0 kubenswrapper[3991]: I0308 03:09:48.661397 3991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwt4z\" (UniqueName: \"kubernetes.io/projected/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-kube-api-access-kwt4z\") on node \"master-0\" DevicePath \"\""
Mar 08 03:09:48.661664 master-0 kubenswrapper[3991]: I0308 03:09:48.661649 3991 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\""
Mar 08 03:09:48.661739 master-0 kubenswrapper[3991]: I0308 03:09:48.661727 3991 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\""
Mar 08 03:09:48.661813 master-0 kubenswrapper[3991]: I0308 03:09:48.661802 3991 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-host-resolv-conf\") on node \"master-0\" DevicePath \"\""
Mar 08 03:09:48.661870 master-0 kubenswrapper[3991]: I0308 03:09:48.661860 3991 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/f5e953eb-2d1d-4d67-969b-bdecc69b61f0-host-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 08 03:09:49.405572 master-0 kubenswrapper[3991]: I0308 03:09:49.405142 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log"
Mar 08 03:09:49.407222 master-0 kubenswrapper[3991]: I0308 03:09:49.406558 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"94f9825100c515930737671c9db902b97098151c7357d0a97122a599d22e13f1"}
Mar 08 03:09:49.408800 master-0 kubenswrapper[3991]: I0308 03:09:49.408714 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-rtvl6" event={"ID":"f5e953eb-2d1d-4d67-969b-bdecc69b61f0","Type":"ContainerDied","Data":"7f48163433a800aeba4eb45dc8cedb1f723024dbb49945d8a5d3caa82f3778dc"}
Mar 08 03:09:49.408800 master-0 kubenswrapper[3991]: I0308 03:09:49.408763 3991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f48163433a800aeba4eb45dc8cedb1f723024dbb49945d8a5d3caa82f3778dc"
Mar 08 03:09:49.409021 master-0 kubenswrapper[3991]: I0308 03:09:49.408806 3991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-rtvl6"
Mar 08 03:09:49.425525 master-0 kubenswrapper[3991]: I0308 03:09:49.424541 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=1.424514388 podStartE2EDuration="1.424514388s" podCreationTimestamp="2026-03-08 03:09:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:09:49.42423163 +0000 UTC m=+50.990168905" watchObservedRunningTime="2026-03-08 03:09:49.424514388 +0000 UTC m=+50.990451643"
Mar 08 03:09:49.593274 master-0 kubenswrapper[3991]: I0308 03:09:49.592510 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-nsbp4"]
Mar 08 03:09:49.593274 master-0 kubenswrapper[3991]: E0308 03:09:49.592612 3991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5e953eb-2d1d-4d67-969b-bdecc69b61f0" containerName="assisted-installer-controller"
Mar 08 03:09:49.593274 master-0 kubenswrapper[3991]: I0308 03:09:49.592631 3991 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5e953eb-2d1d-4d67-969b-bdecc69b61f0" containerName="assisted-installer-controller"
Mar 08 03:09:49.593274 master-0 kubenswrapper[3991]: I0308 03:09:49.592674 3991 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5e953eb-2d1d-4d67-969b-bdecc69b61f0" containerName="assisted-installer-controller"
Mar 08 03:09:49.593274 master-0 kubenswrapper[3991]: I0308 03:09:49.592894 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-nsbp4"
Mar 08 03:09:49.670842 master-0 kubenswrapper[3991]: I0308 03:09:49.670741 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvtx7\" (UniqueName: \"kubernetes.io/projected/cb1042c7-d08a-436c-a737-11573992faff-kube-api-access-wvtx7\") pod \"mtu-prober-nsbp4\" (UID: \"cb1042c7-d08a-436c-a737-11573992faff\") " pod="openshift-network-operator/mtu-prober-nsbp4"
Mar 08 03:09:49.771873 master-0 kubenswrapper[3991]: I0308 03:09:49.771794 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvtx7\" (UniqueName: \"kubernetes.io/projected/cb1042c7-d08a-436c-a737-11573992faff-kube-api-access-wvtx7\") pod \"mtu-prober-nsbp4\" (UID: \"cb1042c7-d08a-436c-a737-11573992faff\") " pod="openshift-network-operator/mtu-prober-nsbp4"
Mar 08 03:09:49.801681 master-0 kubenswrapper[3991]: I0308 03:09:49.801563 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvtx7\" (UniqueName: \"kubernetes.io/projected/cb1042c7-d08a-436c-a737-11573992faff-kube-api-access-wvtx7\") pod \"mtu-prober-nsbp4\" (UID: \"cb1042c7-d08a-436c-a737-11573992faff\") " pod="openshift-network-operator/mtu-prober-nsbp4"
Mar 08 03:09:49.872452 master-0 kubenswrapper[3991]: I0308 03:09:49.872286 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld"
Mar 08 03:09:49.872630 master-0 kubenswrapper[3991]: E0308 03:09:49.872503 3991 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 08 03:09:49.872714 master-0 kubenswrapper[3991]: E0308 03:09:49.872639 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert podName:1f7c9726-057b-4c5c-8a03-9bc407dedb9b nodeName:}" failed. No retries permitted until 2026-03-08 03:09:57.872605242 +0000 UTC m=+59.438542507 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert") pod "cluster-version-operator-745944c6b7-rs4ld" (UID: "1f7c9726-057b-4c5c-8a03-9bc407dedb9b") : secret "cluster-version-operator-serving-cert" not found
Mar 08 03:09:49.912388 master-0 kubenswrapper[3991]: I0308 03:09:49.912246 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-nsbp4"
Mar 08 03:09:49.929334 master-0 kubenswrapper[3991]: W0308 03:09:49.929238 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb1042c7_d08a_436c_a737_11573992faff.slice/crio-74247a24bee81923a49c76bb5a3351b35d692a56184ad3e7d459ca63e5984aec WatchSource:0}: Error finding container 74247a24bee81923a49c76bb5a3351b35d692a56184ad3e7d459ca63e5984aec: Status 404 returned error can't find the container with id 74247a24bee81923a49c76bb5a3351b35d692a56184ad3e7d459ca63e5984aec
Mar 08 03:09:50.413119 master-0 kubenswrapper[3991]: I0308 03:09:50.412959 3991 generic.go:334] "Generic (PLEG): container finished" podID="cb1042c7-d08a-436c-a737-11573992faff" containerID="8f306ce0a691aaca594f05377489d0fedf338512ca0fc5f460eabd4f8b2245d1" exitCode=0
Mar 08 03:09:50.413119 master-0 kubenswrapper[3991]: I0308 03:09:50.413017 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-nsbp4" event={"ID":"cb1042c7-d08a-436c-a737-11573992faff","Type":"ContainerDied","Data":"8f306ce0a691aaca594f05377489d0fedf338512ca0fc5f460eabd4f8b2245d1"}
Mar 08 03:09:50.413373 master-0 kubenswrapper[3991]: I0308 03:09:50.413123 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-nsbp4" event={"ID":"cb1042c7-d08a-436c-a737-11573992faff","Type":"ContainerStarted","Data":"74247a24bee81923a49c76bb5a3351b35d692a56184ad3e7d459ca63e5984aec"}
Mar 08 03:09:51.440058 master-0 kubenswrapper[3991]: I0308 03:09:51.440018 3991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-nsbp4"
Mar 08 03:09:51.584972 master-0 kubenswrapper[3991]: I0308 03:09:51.584845 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvtx7\" (UniqueName: \"kubernetes.io/projected/cb1042c7-d08a-436c-a737-11573992faff-kube-api-access-wvtx7\") pod \"cb1042c7-d08a-436c-a737-11573992faff\" (UID: \"cb1042c7-d08a-436c-a737-11573992faff\") "
Mar 08 03:09:51.590309 master-0 kubenswrapper[3991]: I0308 03:09:51.590156 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb1042c7-d08a-436c-a737-11573992faff-kube-api-access-wvtx7" (OuterVolumeSpecName: "kube-api-access-wvtx7") pod "cb1042c7-d08a-436c-a737-11573992faff" (UID: "cb1042c7-d08a-436c-a737-11573992faff"). InnerVolumeSpecName "kube-api-access-wvtx7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:09:51.686131 master-0 kubenswrapper[3991]: I0308 03:09:51.686001 3991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvtx7\" (UniqueName: \"kubernetes.io/projected/cb1042c7-d08a-436c-a737-11573992faff-kube-api-access-wvtx7\") on node \"master-0\" DevicePath \"\""
Mar 08 03:09:52.420519 master-0 kubenswrapper[3991]: I0308 03:09:52.420429 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-nsbp4" event={"ID":"cb1042c7-d08a-436c-a737-11573992faff","Type":"ContainerDied","Data":"74247a24bee81923a49c76bb5a3351b35d692a56184ad3e7d459ca63e5984aec"}
Mar 08 03:09:52.420519 master-0 kubenswrapper[3991]: I0308 03:09:52.420500 3991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74247a24bee81923a49c76bb5a3351b35d692a56184ad3e7d459ca63e5984aec"
Mar 08 03:09:52.420519 master-0 kubenswrapper[3991]: I0308 03:09:52.420508 3991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-nsbp4"
Mar 08 03:09:54.603589 master-0 kubenswrapper[3991]: I0308 03:09:54.603450 3991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-nsbp4"]
Mar 08 03:09:54.607731 master-0 kubenswrapper[3991]: I0308 03:09:54.607640 3991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-nsbp4"]
Mar 08 03:09:55.221891 master-0 kubenswrapper[3991]: I0308 03:09:55.221770 3991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb1042c7-d08a-436c-a737-11573992faff" path="/var/lib/kubelet/pods/cb1042c7-d08a-436c-a737-11573992faff/volumes"
Mar 08 03:09:57.935851 master-0 kubenswrapper[3991]: I0308 03:09:57.935716 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld"
Mar 08 03:09:57.936977 master-0 kubenswrapper[3991]: E0308 03:09:57.936611 3991 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 08 03:09:57.936977 master-0 kubenswrapper[3991]: E0308 03:09:57.936700 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert podName:1f7c9726-057b-4c5c-8a03-9bc407dedb9b nodeName:}" failed. No retries permitted until 2026-03-08 03:10:13.936674711 +0000 UTC m=+75.502611966 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert") pod "cluster-version-operator-745944c6b7-rs4ld" (UID: "1f7c9726-057b-4c5c-8a03-9bc407dedb9b") : secret "cluster-version-operator-serving-cert" not found
Mar 08 03:09:59.480157 master-0 kubenswrapper[3991]: I0308 03:09:59.479977 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-jzw4f"]
Mar 08 03:09:59.480157 master-0 kubenswrapper[3991]: E0308 03:09:59.480120 3991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb1042c7-d08a-436c-a737-11573992faff" containerName="prober"
Mar 08 03:09:59.480157 master-0 kubenswrapper[3991]: I0308 03:09:59.480148 3991 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb1042c7-d08a-436c-a737-11573992faff" containerName="prober"
Mar 08 03:09:59.481578 master-0 kubenswrapper[3991]: I0308 03:09:59.480200 3991 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb1042c7-d08a-436c-a737-11573992faff" containerName="prober"
Mar 08 03:09:59.481578 master-0 kubenswrapper[3991]: I0308 03:09:59.480524 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.483299 master-0 kubenswrapper[3991]: I0308 03:09:59.483228 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 08 03:09:59.484524 master-0 kubenswrapper[3991]: I0308 03:09:59.484447 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 08 03:09:59.484664 master-0 kubenswrapper[3991]: I0308 03:09:59.484612 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 08 03:09:59.484770 master-0 kubenswrapper[3991]: I0308 03:09:59.484700 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 08 03:09:59.648159 master-0 kubenswrapper[3991]: I0308 03:09:59.648075 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-cni-multus\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.648159 master-0 kubenswrapper[3991]: I0308 03:09:59.648147 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-hostroot\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.648516 master-0 kubenswrapper[3991]: I0308 03:09:59.648188 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-k8s-cni-cncf-io\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.648516 master-0 kubenswrapper[3991]: I0308 03:09:59.648226 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-cnibin\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.648516 master-0 kubenswrapper[3991]: I0308 03:09:59.648263 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-netns\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.648516 master-0 kubenswrapper[3991]: I0308 03:09:59.648371 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-system-cni-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.648516 master-0 kubenswrapper[3991]: I0308 03:09:59.648435 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-cni-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.648516 master-0 kubenswrapper[3991]: I0308 03:09:59.648496 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-multus-certs\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.648516 master-0 kubenswrapper[3991]: I0308 03:09:59.648519 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj7h8\" (UniqueName: \"kubernetes.io/projected/a55bef81-2381-4036-b171-3dbc77e9c25d-kube-api-access-hj7h8\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.649012 master-0 kubenswrapper[3991]: I0308 03:09:59.648544 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-conf-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.649012 master-0 kubenswrapper[3991]: I0308 03:09:59.648649 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-os-release\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.649012 master-0 kubenswrapper[3991]: I0308 03:09:59.648732 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-kubelet\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.649012 master-0 kubenswrapper[3991]: I0308 03:09:59.648771 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-socket-dir-parent\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.649012 master-0 kubenswrapper[3991]: I0308 03:09:59.648804 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-daemon-config\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.649012 master-0 kubenswrapper[3991]: I0308 03:09:59.648838 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-etc-kubernetes\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.649012 master-0 kubenswrapper[3991]: I0308 03:09:59.648987 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a55bef81-2381-4036-b171-3dbc77e9c25d-cni-binary-copy\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.649414 master-0 kubenswrapper[3991]: I0308 03:09:59.649052 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-cni-bin\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.674955 master-0 kubenswrapper[3991]: I0308 03:09:59.674850 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-c8gc6"]
Mar 08 03:09:59.675827 master-0 kubenswrapper[3991]: I0308 03:09:59.675768 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-c8gc6"
Mar 08 03:09:59.680626 master-0 kubenswrapper[3991]: I0308 03:09:59.680568 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Mar 08 03:09:59.684294 master-0 kubenswrapper[3991]: I0308 03:09:59.684219 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 08 03:09:59.750291 master-0 kubenswrapper[3991]: I0308 03:09:59.750119 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-cnibin\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.750291 master-0 kubenswrapper[3991]: I0308 03:09:59.750180 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-netns\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.750291 master-0 kubenswrapper[3991]: I0308 03:09:59.750212 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-system-cni-dir\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6"
Mar 08 03:09:59.750664 master-0 kubenswrapper[3991]: I0308 03:09:59.750329 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-cnibin\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.750664 master-0 kubenswrapper[3991]: I0308 03:09:59.750353 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-netns\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.750664 master-0 kubenswrapper[3991]: I0308 03:09:59.750420 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-system-cni-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.750664 master-0 kubenswrapper[3991]: I0308 03:09:59.750465 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-cni-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.750664 master-0 kubenswrapper[3991]: I0308 03:09:59.750533 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-multus-certs\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.750664 master-0 kubenswrapper[3991]: I0308 03:09:59.750548 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-system-cni-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.750664 master-0 kubenswrapper[3991]: I0308 03:09:59.750568 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj7h8\" (UniqueName: \"kubernetes.io/projected/a55bef81-2381-4036-b171-3dbc77e9c25d-kube-api-access-hj7h8\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.750664 master-0 kubenswrapper[3991]: I0308 03:09:59.750611 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-cnibin\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6"
Mar 08 03:09:59.750664 master-0 kubenswrapper[3991]: I0308 03:09:59.750617 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-cni-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.750664 master-0 kubenswrapper[3991]: I0308 03:09:59.750657 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-conf-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.751598 master-0 kubenswrapper[3991]: I0308 03:09:59.750708 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-multus-certs\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.751598 master-0 kubenswrapper[3991]: I0308 03:09:59.750823 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-cni-binary-copy\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6"
Mar 08 03:09:59.751598 master-0 kubenswrapper[3991]: I0308 03:09:59.750869 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-os-release\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.751598 master-0 kubenswrapper[3991]: I0308 03:09:59.750965 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-conf-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.751598 master-0 kubenswrapper[3991]: I0308 03:09:59.751116 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-os-release\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.752092 master-0 kubenswrapper[3991]: I0308 03:09:59.751583 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-os-release\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6"
Mar 08 03:09:59.752092 master-0 kubenswrapper[3991]: I0308 03:09:59.751675 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2tk7\" (UniqueName: \"kubernetes.io/projected/d5eee869-c27f-4534-bbce-d954c42b36a3-kube-api-access-l2tk7\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6"
Mar 08 03:09:59.752092 master-0 kubenswrapper[3991]: I0308 03:09:59.751749 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-kubelet\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.752092 master-0 kubenswrapper[3991]: I0308 03:09:59.751796 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6"
Mar 08 03:09:59.752092 master-0 kubenswrapper[3991]: I0308 03:09:59.751834 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-socket-dir-parent\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.752092 master-0 kubenswrapper[3991]: I0308 03:09:59.751971 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-kubelet\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.752092 master-0 kubenswrapper[3991]: I0308 03:09:59.751973 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-socket-dir-parent\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.752092 master-0 kubenswrapper[3991]: I0308 03:09:59.752006 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-etc-kubernetes\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.752092 master-0 kubenswrapper[3991]: I0308 03:09:59.752043 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-etc-kubernetes\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.752092 master-0 kubenswrapper[3991]: I0308 03:09:59.752053 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6"
Mar 08 03:09:59.753068 master-0 kubenswrapper[3991]: I0308 03:09:59.752117 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a55bef81-2381-4036-b171-3dbc77e9c25d-cni-binary-copy\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:09:59.753068 master-0 kubenswrapper[3991]: I0308 03:09:59.752153 3991 reconciler_common.go:218] "operationExecutor.MountVolume started
for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-cni-bin\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:09:59.753068 master-0 kubenswrapper[3991]: I0308 03:09:59.752206 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-cni-bin\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:09:59.753068 master-0 kubenswrapper[3991]: I0308 03:09:59.752243 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-daemon-config\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:09:59.753068 master-0 kubenswrapper[3991]: I0308 03:09:59.752278 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-hostroot\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:09:59.753068 master-0 kubenswrapper[3991]: I0308 03:09:59.752316 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-whereabouts-configmap\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:09:59.753068 master-0 kubenswrapper[3991]: I0308 03:09:59.752377 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-k8s-cni-cncf-io\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:09:59.753068 master-0 kubenswrapper[3991]: I0308 03:09:59.752448 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-cni-multus\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:09:59.753068 master-0 kubenswrapper[3991]: I0308 03:09:59.752527 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-cni-multus\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:09:59.753068 master-0 kubenswrapper[3991]: I0308 03:09:59.752532 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-k8s-cni-cncf-io\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:09:59.753068 master-0 kubenswrapper[3991]: I0308 03:09:59.752568 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-hostroot\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:09:59.754252 master-0 kubenswrapper[3991]: I0308 03:09:59.753696 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-daemon-config\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:09:59.754252 master-0 kubenswrapper[3991]: I0308 03:09:59.754189 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a55bef81-2381-4036-b171-3dbc77e9c25d-cni-binary-copy\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:09:59.782361 master-0 kubenswrapper[3991]: I0308 03:09:59.782239 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj7h8\" (UniqueName: \"kubernetes.io/projected/a55bef81-2381-4036-b171-3dbc77e9c25d-kube-api-access-hj7h8\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:09:59.804640 master-0 kubenswrapper[3991]: I0308 03:09:59.804556 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-jzw4f" Mar 08 03:09:59.823599 master-0 kubenswrapper[3991]: W0308 03:09:59.823343 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda55bef81_2381_4036_b171_3dbc77e9c25d.slice/crio-f2057fa5db1def1b4beab4f6ad7ad5d375b26c00136a93b9850880221e4af077 WatchSource:0}: Error finding container f2057fa5db1def1b4beab4f6ad7ad5d375b26c00136a93b9850880221e4af077: Status 404 returned error can't find the container with id f2057fa5db1def1b4beab4f6ad7ad5d375b26c00136a93b9850880221e4af077 Mar 08 03:09:59.853064 master-0 kubenswrapper[3991]: I0308 03:09:59.852802 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:09:59.853064 master-0 kubenswrapper[3991]: I0308 03:09:59.852874 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-whereabouts-configmap\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:09:59.853064 master-0 kubenswrapper[3991]: I0308 03:09:59.852945 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-system-cni-dir\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:09:59.853064 master-0 kubenswrapper[3991]: I0308 03:09:59.853006 3991 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-cnibin\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:09:59.853624 master-0 kubenswrapper[3991]: I0308 03:09:59.853416 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-cnibin\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:09:59.853624 master-0 kubenswrapper[3991]: I0308 03:09:59.853550 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-system-cni-dir\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:09:59.853624 master-0 kubenswrapper[3991]: I0308 03:09:59.853613 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-cni-binary-copy\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:09:59.853859 master-0 kubenswrapper[3991]: I0308 03:09:59.853673 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-os-release\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:09:59.853859 
master-0 kubenswrapper[3991]: I0308 03:09:59.853731 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2tk7\" (UniqueName: \"kubernetes.io/projected/d5eee869-c27f-4534-bbce-d954c42b36a3-kube-api-access-l2tk7\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:09:59.853859 master-0 kubenswrapper[3991]: I0308 03:09:59.853784 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:09:59.854227 master-0 kubenswrapper[3991]: I0308 03:09:59.854111 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-whereabouts-configmap\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:09:59.854227 master-0 kubenswrapper[3991]: I0308 03:09:59.854141 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:09:59.854386 master-0 kubenswrapper[3991]: I0308 03:09:59.854275 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-os-release\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: 
\"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:09:59.855288 master-0 kubenswrapper[3991]: I0308 03:09:59.855082 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:09:59.855672 master-0 kubenswrapper[3991]: I0308 03:09:59.855559 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-cni-binary-copy\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:09:59.888989 master-0 kubenswrapper[3991]: I0308 03:09:59.888882 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2tk7\" (UniqueName: \"kubernetes.io/projected/d5eee869-c27f-4534-bbce-d954c42b36a3-kube-api-access-l2tk7\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:09:59.995521 master-0 kubenswrapper[3991]: I0308 03:09:59.995420 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:10:00.014830 master-0 kubenswrapper[3991]: W0308 03:10:00.014766 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5eee869_c27f_4534_bbce_d954c42b36a3.slice/crio-7318cd3451d32a71b4c756d7048c3d653bc133c447ae6a1c5c593d8efda4718a WatchSource:0}: Error finding container 7318cd3451d32a71b4c756d7048c3d653bc133c447ae6a1c5c593d8efda4718a: Status 404 returned error can't find the container with id 7318cd3451d32a71b4c756d7048c3d653bc133c447ae6a1c5c593d8efda4718a Mar 08 03:10:00.451272 master-0 kubenswrapper[3991]: I0308 03:10:00.451225 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c8gc6" event={"ID":"d5eee869-c27f-4534-bbce-d954c42b36a3","Type":"ContainerStarted","Data":"7318cd3451d32a71b4c756d7048c3d653bc133c447ae6a1c5c593d8efda4718a"} Mar 08 03:10:00.453052 master-0 kubenswrapper[3991]: I0308 03:10:00.453029 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jzw4f" event={"ID":"a55bef81-2381-4036-b171-3dbc77e9c25d","Type":"ContainerStarted","Data":"f2057fa5db1def1b4beab4f6ad7ad5d375b26c00136a93b9850880221e4af077"} Mar 08 03:10:00.463157 master-0 kubenswrapper[3991]: I0308 03:10:00.463103 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-2l64n"] Mar 08 03:10:00.463694 master-0 kubenswrapper[3991]: I0308 03:10:00.463617 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:10:00.463808 master-0 kubenswrapper[3991]: E0308 03:10:00.463787 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a" Mar 08 03:10:00.560159 master-0 kubenswrapper[3991]: I0308 03:10:00.560108 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:10:00.560766 master-0 kubenswrapper[3991]: I0308 03:10:00.560176 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njrcj\" (UniqueName: \"kubernetes.io/projected/f6ee6202-11e5-4586-ae46-075da1ad7f1a-kube-api-access-njrcj\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:10:00.660958 master-0 kubenswrapper[3991]: I0308 03:10:00.660888 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:10:00.661155 master-0 kubenswrapper[3991]: E0308 03:10:00.661081 3991 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object 
"openshift-multus"/"metrics-daemon-secret" not registered Mar 08 03:10:00.661189 master-0 kubenswrapper[3991]: E0308 03:10:00.661166 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs podName:f6ee6202-11e5-4586-ae46-075da1ad7f1a nodeName:}" failed. No retries permitted until 2026-03-08 03:10:01.161143009 +0000 UTC m=+62.727080304 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs") pod "network-metrics-daemon-2l64n" (UID: "f6ee6202-11e5-4586-ae46-075da1ad7f1a") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 03:10:00.661386 master-0 kubenswrapper[3991]: I0308 03:10:00.661308 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njrcj\" (UniqueName: \"kubernetes.io/projected/f6ee6202-11e5-4586-ae46-075da1ad7f1a-kube-api-access-njrcj\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:10:00.679135 master-0 kubenswrapper[3991]: I0308 03:10:00.679028 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njrcj\" (UniqueName: \"kubernetes.io/projected/f6ee6202-11e5-4586-ae46-075da1ad7f1a-kube-api-access-njrcj\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:10:01.165061 master-0 kubenswrapper[3991]: I0308 03:10:01.164937 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 
03:10:01.165061 master-0 kubenswrapper[3991]: E0308 03:10:01.165051 3991 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 03:10:01.165462 master-0 kubenswrapper[3991]: E0308 03:10:01.165097 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs podName:f6ee6202-11e5-4586-ae46-075da1ad7f1a nodeName:}" failed. No retries permitted until 2026-03-08 03:10:02.165084811 +0000 UTC m=+63.731022036 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs") pod "network-metrics-daemon-2l64n" (UID: "f6ee6202-11e5-4586-ae46-075da1ad7f1a") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 03:10:02.174356 master-0 kubenswrapper[3991]: I0308 03:10:02.174226 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:10:02.175198 master-0 kubenswrapper[3991]: E0308 03:10:02.174521 3991 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 03:10:02.175198 master-0 kubenswrapper[3991]: E0308 03:10:02.174639 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs podName:f6ee6202-11e5-4586-ae46-075da1ad7f1a nodeName:}" failed. No retries permitted until 2026-03-08 03:10:04.174611068 +0000 UTC m=+65.740548333 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs") pod "network-metrics-daemon-2l64n" (UID: "f6ee6202-11e5-4586-ae46-075da1ad7f1a") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 03:10:02.216898 master-0 kubenswrapper[3991]: I0308 03:10:02.216807 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:10:02.217079 master-0 kubenswrapper[3991]: E0308 03:10:02.216966 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a" Mar 08 03:10:03.467701 master-0 kubenswrapper[3991]: I0308 03:10:03.467435 3991 generic.go:334] "Generic (PLEG): container finished" podID="d5eee869-c27f-4534-bbce-d954c42b36a3" containerID="c819f7232b6c404b174ef7e43a5fe243e69bdbd6f882a1b6a72687cf4603a3a5" exitCode=0 Mar 08 03:10:03.467701 master-0 kubenswrapper[3991]: I0308 03:10:03.467558 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c8gc6" event={"ID":"d5eee869-c27f-4534-bbce-d954c42b36a3","Type":"ContainerDied","Data":"c819f7232b6c404b174ef7e43a5fe243e69bdbd6f882a1b6a72687cf4603a3a5"} Mar 08 03:10:04.193965 master-0 kubenswrapper[3991]: I0308 03:10:04.193872 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 
03:10:04.194158 master-0 kubenswrapper[3991]: E0308 03:10:04.194110 3991 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 03:10:04.194248 master-0 kubenswrapper[3991]: E0308 03:10:04.194222 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs podName:f6ee6202-11e5-4586-ae46-075da1ad7f1a nodeName:}" failed. No retries permitted until 2026-03-08 03:10:08.194196108 +0000 UTC m=+69.760133363 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs") pod "network-metrics-daemon-2l64n" (UID: "f6ee6202-11e5-4586-ae46-075da1ad7f1a") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 03:10:04.217276 master-0 kubenswrapper[3991]: I0308 03:10:04.217236 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:10:04.217433 master-0 kubenswrapper[3991]: E0308 03:10:04.217396 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a" Mar 08 03:10:06.216686 master-0 kubenswrapper[3991]: I0308 03:10:06.216648 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:10:06.217239 master-0 kubenswrapper[3991]: E0308 03:10:06.216751 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a" Mar 08 03:10:08.216435 master-0 kubenswrapper[3991]: I0308 03:10:08.216265 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:10:08.217387 master-0 kubenswrapper[3991]: E0308 03:10:08.216441 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a" Mar 08 03:10:08.227729 master-0 kubenswrapper[3991]: I0308 03:10:08.227650 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:10:08.228028 master-0 kubenswrapper[3991]: E0308 03:10:08.227964 3991 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 03:10:08.228137 master-0 kubenswrapper[3991]: E0308 03:10:08.228108 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs podName:f6ee6202-11e5-4586-ae46-075da1ad7f1a nodeName:}" failed. No retries permitted until 2026-03-08 03:10:16.228070075 +0000 UTC m=+77.794007340 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs") pod "network-metrics-daemon-2l64n" (UID: "f6ee6202-11e5-4586-ae46-075da1ad7f1a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 08 03:10:08.484845 master-0 kubenswrapper[3991]: I0308 03:10:08.484733 3991 generic.go:334] "Generic (PLEG): container finished" podID="d5eee869-c27f-4534-bbce-d954c42b36a3" containerID="c9ed066ab454b7a45ceb4d194fe0690fb319c3957701da913065477256cffc60" exitCode=0
Mar 08 03:10:08.484845 master-0 kubenswrapper[3991]: I0308 03:10:08.484773 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c8gc6" event={"ID":"d5eee869-c27f-4534-bbce-d954c42b36a3","Type":"ContainerDied","Data":"c9ed066ab454b7a45ceb4d194fe0690fb319c3957701da913065477256cffc60"}
Mar 08 03:10:10.217081 master-0 kubenswrapper[3991]: I0308 03:10:10.216880 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:10:10.217081 master-0 kubenswrapper[3991]: E0308 03:10:10.217037 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a"
Mar 08 03:10:11.869713 master-0 kubenswrapper[3991]: I0308 03:10:11.868370 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch"]
Mar 08 03:10:11.869713 master-0 kubenswrapper[3991]: I0308 03:10:11.868764 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch"
Mar 08 03:10:11.870975 master-0 kubenswrapper[3991]: I0308 03:10:11.870890 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 08 03:10:11.872023 master-0 kubenswrapper[3991]: I0308 03:10:11.871979 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 08 03:10:11.872512 master-0 kubenswrapper[3991]: I0308 03:10:11.872472 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 08 03:10:11.872953 master-0 kubenswrapper[3991]: I0308 03:10:11.872877 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 08 03:10:11.874325 master-0 kubenswrapper[3991]: I0308 03:10:11.873121 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 08 03:10:11.959667 master-0 kubenswrapper[3991]: I0308 03:10:11.959603 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch"
Mar 08 03:10:11.959667 master-0 kubenswrapper[3991]: I0308 03:10:11.959654 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q425\" (UniqueName: \"kubernetes.io/projected/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-kube-api-access-6q425\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch"
Mar 08 03:10:11.959882 master-0 kubenswrapper[3991]: I0308 03:10:11.959700 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch"
Mar 08 03:10:11.959882 master-0 kubenswrapper[3991]: I0308 03:10:11.959719 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch"
Mar 08 03:10:12.059975 master-0 kubenswrapper[3991]: I0308 03:10:12.059889 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch"
Mar 08 03:10:12.060159 master-0 kubenswrapper[3991]: I0308 03:10:12.060036 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q425\" (UniqueName: \"kubernetes.io/projected/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-kube-api-access-6q425\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch"
Mar 08 03:10:12.060159 master-0 kubenswrapper[3991]: I0308 03:10:12.060095 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch"
Mar 08 03:10:12.060159 master-0 kubenswrapper[3991]: I0308 03:10:12.060116 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch"
Mar 08 03:10:12.061229 master-0 kubenswrapper[3991]: I0308 03:10:12.061068 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch"
Mar 08 03:10:12.061952 master-0 kubenswrapper[3991]: I0308 03:10:12.061896 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch"
Mar 08 03:10:12.067376 master-0 kubenswrapper[3991]: I0308 03:10:12.067163 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch"
Mar 08 03:10:12.068107 master-0 kubenswrapper[3991]: I0308 03:10:12.068072 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z6mfs"]
Mar 08 03:10:12.068962 master-0 kubenswrapper[3991]: I0308 03:10:12.068939 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.070886 master-0 kubenswrapper[3991]: I0308 03:10:12.070837 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 08 03:10:12.071482 master-0 kubenswrapper[3991]: I0308 03:10:12.071445 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 08 03:10:12.081704 master-0 kubenswrapper[3991]: I0308 03:10:12.081668 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q425\" (UniqueName: \"kubernetes.io/projected/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-kube-api-access-6q425\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch"
Mar 08 03:10:12.161065 master-0 kubenswrapper[3991]: I0308 03:10:12.160857 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-slash\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.161065 master-0 kubenswrapper[3991]: I0308 03:10:12.160934 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-cni-netd\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.161318 master-0 kubenswrapper[3991]: I0308 03:10:12.161100 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-run-systemd\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.161318 master-0 kubenswrapper[3991]: I0308 03:10:12.161221 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.161318 master-0 kubenswrapper[3991]: I0308 03:10:12.161262 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzn8c\" (UniqueName: \"kubernetes.io/projected/18c148bd-0a23-46f1-b54e-6e8fd18825d5-kube-api-access-pzn8c\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.161425 master-0 kubenswrapper[3991]: I0308 03:10:12.161340 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-run-netns\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.161425 master-0 kubenswrapper[3991]: I0308 03:10:12.161374 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-run-ovn\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.161425 master-0 kubenswrapper[3991]: I0308 03:10:12.161401 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-node-log\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.161522 master-0 kubenswrapper[3991]: I0308 03:10:12.161433 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-cni-bin\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.161522 master-0 kubenswrapper[3991]: I0308 03:10:12.161463 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-log-socket\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.161522 master-0 kubenswrapper[3991]: I0308 03:10:12.161501 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/18c148bd-0a23-46f1-b54e-6e8fd18825d5-ovnkube-script-lib\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.161634 master-0 kubenswrapper[3991]: I0308 03:10:12.161540 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/18c148bd-0a23-46f1-b54e-6e8fd18825d5-env-overrides\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.161634 master-0 kubenswrapper[3991]: I0308 03:10:12.161576 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-var-lib-openvswitch\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.161634 master-0 kubenswrapper[3991]: I0308 03:10:12.161606 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-etc-openvswitch\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.161738 master-0 kubenswrapper[3991]: I0308 03:10:12.161638 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-run-openvswitch\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.161738 master-0 kubenswrapper[3991]: I0308 03:10:12.161670 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.161738 master-0 kubenswrapper[3991]: I0308 03:10:12.161706 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/18c148bd-0a23-46f1-b54e-6e8fd18825d5-ovn-node-metrics-cert\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.161842 master-0 kubenswrapper[3991]: I0308 03:10:12.161741 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-kubelet\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.161842 master-0 kubenswrapper[3991]: I0308 03:10:12.161785 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-systemd-units\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.161842 master-0 kubenswrapper[3991]: I0308 03:10:12.161819 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/18c148bd-0a23-46f1-b54e-6e8fd18825d5-ovnkube-config\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.193147 master-0 kubenswrapper[3991]: I0308 03:10:12.193061 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch"
Mar 08 03:10:12.217014 master-0 kubenswrapper[3991]: I0308 03:10:12.216893 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:10:12.217186 master-0 kubenswrapper[3991]: E0308 03:10:12.217120 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a"
Mar 08 03:10:12.262353 master-0 kubenswrapper[3991]: I0308 03:10:12.262280 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.262353 master-0 kubenswrapper[3991]: I0308 03:10:12.262336 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzn8c\" (UniqueName: \"kubernetes.io/projected/18c148bd-0a23-46f1-b54e-6e8fd18825d5-kube-api-access-pzn8c\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.262569 master-0 kubenswrapper[3991]: I0308 03:10:12.262367 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-run-netns\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.262569 master-0 kubenswrapper[3991]: I0308 03:10:12.262479 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.262569 master-0 kubenswrapper[3991]: I0308 03:10:12.262520 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-run-ovn\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.262695 master-0 kubenswrapper[3991]: I0308 03:10:12.262632 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-run-netns\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.262861 master-0 kubenswrapper[3991]: I0308 03:10:12.262824 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-node-log\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.262935 master-0 kubenswrapper[3991]: I0308 03:10:12.262880 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-cni-bin\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.262986 master-0 kubenswrapper[3991]: I0308 03:10:12.262940 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-run-ovn\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.263031 master-0 kubenswrapper[3991]: I0308 03:10:12.262998 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-cni-bin\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.263031 master-0 kubenswrapper[3991]: I0308 03:10:12.263014 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-log-socket\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.263109 master-0 kubenswrapper[3991]: I0308 03:10:12.262941 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-log-socket\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.263109 master-0 kubenswrapper[3991]: I0308 03:10:12.263054 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-node-log\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.263109 master-0 kubenswrapper[3991]: I0308 03:10:12.263073 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/18c148bd-0a23-46f1-b54e-6e8fd18825d5-ovnkube-script-lib\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.263210 master-0 kubenswrapper[3991]: I0308 03:10:12.263108 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/18c148bd-0a23-46f1-b54e-6e8fd18825d5-env-overrides\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.263210 master-0 kubenswrapper[3991]: I0308 03:10:12.263136 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-var-lib-openvswitch\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.263777 master-0 kubenswrapper[3991]: I0308 03:10:12.263745 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/18c148bd-0a23-46f1-b54e-6e8fd18825d5-env-overrides\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.263854 master-0 kubenswrapper[3991]: I0308 03:10:12.263815 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-etc-openvswitch\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.263854 master-0 kubenswrapper[3991]: I0308 03:10:12.263847 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-run-openvswitch\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.263972 master-0 kubenswrapper[3991]: I0308 03:10:12.263876 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.263972 master-0 kubenswrapper[3991]: I0308 03:10:12.263940 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/18c148bd-0a23-46f1-b54e-6e8fd18825d5-ovn-node-metrics-cert\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.264049 master-0 kubenswrapper[3991]: I0308 03:10:12.263970 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-kubelet\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.264049 master-0 kubenswrapper[3991]: I0308 03:10:12.263998 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-systemd-units\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.264049 master-0 kubenswrapper[3991]: I0308 03:10:12.264024 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/18c148bd-0a23-46f1-b54e-6e8fd18825d5-ovnkube-config\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.264154 master-0 kubenswrapper[3991]: I0308 03:10:12.264064 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-slash\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.264154 master-0 kubenswrapper[3991]: I0308 03:10:12.264091 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-cni-netd\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.264154 master-0 kubenswrapper[3991]: I0308 03:10:12.264124 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.264154 master-0 kubenswrapper[3991]: I0308 03:10:12.264145 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-kubelet\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.264275 master-0 kubenswrapper[3991]: I0308 03:10:12.264155 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-slash\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.264275 master-0 kubenswrapper[3991]: I0308 03:10:12.264178 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/18c148bd-0a23-46f1-b54e-6e8fd18825d5-ovnkube-script-lib\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.264275 master-0 kubenswrapper[3991]: I0308 03:10:12.264190 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-var-lib-openvswitch\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.264275 master-0 kubenswrapper[3991]: I0308 03:10:12.264224 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-run-openvswitch\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.264275 master-0 kubenswrapper[3991]: I0308 03:10:12.264266 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-run-systemd\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.264421 master-0 kubenswrapper[3991]: I0308 03:10:12.264311 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-cni-netd\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.264421 master-0 kubenswrapper[3991]: I0308 03:10:12.264370 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-etc-openvswitch\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.264421 master-0 kubenswrapper[3991]: I0308 03:10:12.264398 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-systemd-units\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.265154 master-0 kubenswrapper[3991]: I0308 03:10:12.265116 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-run-systemd\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.265210 master-0 kubenswrapper[3991]: I0308 03:10:12.265151 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/18c148bd-0a23-46f1-b54e-6e8fd18825d5-ovnkube-config\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.267324 master-0 kubenswrapper[3991]: I0308 03:10:12.267266 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/18c148bd-0a23-46f1-b54e-6e8fd18825d5-ovn-node-metrics-cert\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.285776 master-0 kubenswrapper[3991]: I0308 03:10:12.285720 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzn8c\" (UniqueName: \"kubernetes.io/projected/18c148bd-0a23-46f1-b54e-6e8fd18825d5-kube-api-access-pzn8c\") pod \"ovnkube-node-z6mfs\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.383635 master-0 kubenswrapper[3991]: I0308 03:10:12.383552 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:12.990166 master-0 kubenswrapper[3991]: W0308 03:10:12.990121 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18c148bd_0a23_46f1_b54e_6e8fd18825d5.slice/crio-a4a403ced26061f4a57952fc11b7d80ef9ddbc18727f66e65a74c804b23d6d97 WatchSource:0}: Error finding container a4a403ced26061f4a57952fc11b7d80ef9ddbc18727f66e65a74c804b23d6d97: Status 404 returned error can't find the container with id a4a403ced26061f4a57952fc11b7d80ef9ddbc18727f66e65a74c804b23d6d97
Mar 08 03:10:12.990896 master-0 kubenswrapper[3991]: W0308 03:10:12.990853 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod631b3a8e_43e0_4818_b6e1_bd61ac531ab6.slice/crio-da13ebe4bb39b539d69ddd6f98c92aef7a368cb8e590b47b5129b0e84f51f727 WatchSource:0}: Error finding container da13ebe4bb39b539d69ddd6f98c92aef7a368cb8e590b47b5129b0e84f51f727: Status 404 returned error can't find the container with id da13ebe4bb39b539d69ddd6f98c92aef7a368cb8e590b47b5129b0e84f51f727
Mar 08 03:10:13.226833 master-0 kubenswrapper[3991]: I0308 03:10:13.226753 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Mar 08 03:10:13.504933 master-0 kubenswrapper[3991]: I0308 03:10:13.504361 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" event={"ID":"18c148bd-0a23-46f1-b54e-6e8fd18825d5","Type":"ContainerStarted","Data":"a4a403ced26061f4a57952fc11b7d80ef9ddbc18727f66e65a74c804b23d6d97"}
Mar 08 03:10:13.508382 master-0 kubenswrapper[3991]: I0308 03:10:13.508328 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-jzw4f" event={"ID":"a55bef81-2381-4036-b171-3dbc77e9c25d","Type":"ContainerStarted","Data":"36492ba1cddf811e5666d61f607f6684c51d34e70ae061ee009e76b3fa5c38ec"}
Mar 08 03:10:13.515161 master-0 kubenswrapper[3991]: I0308 03:10:13.515102 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch" event={"ID":"631b3a8e-43e0-4818-b6e1-bd61ac531ab6","Type":"ContainerStarted","Data":"11ca10c7bd0fa6982cb23c63bc95a07447ca28931364a6068d6974436e427f96"}
Mar 08 03:10:13.515242 master-0 kubenswrapper[3991]: I0308 03:10:13.515173 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch" event={"ID":"631b3a8e-43e0-4818-b6e1-bd61ac531ab6","Type":"ContainerStarted","Data":"da13ebe4bb39b539d69ddd6f98c92aef7a368cb8e590b47b5129b0e84f51f727"}
Mar 08 03:10:13.530256 master-0 kubenswrapper[3991]: I0308 03:10:13.530133 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-jzw4f" podStartSLOduration=1.27442898 podStartE2EDuration="14.530108602s" podCreationTimestamp="2026-03-08 03:09:59 +0000 UTC" firstStartedPulling="2026-03-08 03:09:59.828617284 +0000 UTC m=+61.394554539" lastFinishedPulling="2026-03-08 03:10:13.084296926 +0000 UTC m=+74.650234161" observedRunningTime="2026-03-08 03:10:13.528736106 +0000 UTC m=+75.094673371" watchObservedRunningTime="2026-03-08 03:10:13.530108602 +0000 UTC m=+75.096045857"
Mar 08 03:10:13.545051 master-0 kubenswrapper[3991]: I0308 03:10:13.544884 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=0.544861591 podStartE2EDuration="544.861591ms" podCreationTimestamp="2026-03-08 03:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:10:13.543639158 +0000 UTC m=+75.109576423" watchObservedRunningTime="2026-03-08 03:10:13.544861591 +0000 UTC m=+75.110798846"
Mar 08 03:10:13.979857 master-0 kubenswrapper[3991]: I0308 03:10:13.979827 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld"
Mar 08 03:10:13.980062 master-0 kubenswrapper[3991]: E0308 03:10:13.979965 3991 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 08 03:10:13.980062 master-0 kubenswrapper[3991]: E0308 03:10:13.980014 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert podName:1f7c9726-057b-4c5c-8a03-9bc407dedb9b nodeName:}" failed. No retries permitted until 2026-03-08 03:10:45.980001376 +0000 UTC m=+107.545938601 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert") pod "cluster-version-operator-745944c6b7-rs4ld" (UID: "1f7c9726-057b-4c5c-8a03-9bc407dedb9b") : secret "cluster-version-operator-serving-cert" not found
Mar 08 03:10:14.216281 master-0 kubenswrapper[3991]: I0308 03:10:14.216153 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:10:14.216733 master-0 kubenswrapper[3991]: E0308 03:10:14.216318 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a"
Mar 08 03:10:14.519776 master-0 kubenswrapper[3991]: I0308 03:10:14.519702 3991 generic.go:334] "Generic (PLEG): container finished" podID="d5eee869-c27f-4534-bbce-d954c42b36a3" containerID="23e3dd34f3f6fc9e0e38ff8f0cff6316ca3075b2e57bb67cfa5a7c613c4186a1" exitCode=0
Mar 08 03:10:14.519987 master-0 kubenswrapper[3991]: I0308 03:10:14.519769 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c8gc6" event={"ID":"d5eee869-c27f-4534-bbce-d954c42b36a3","Type":"ContainerDied","Data":"23e3dd34f3f6fc9e0e38ff8f0cff6316ca3075b2e57bb67cfa5a7c613c4186a1"}
Mar 08 03:10:15.063068 master-0 kubenswrapper[3991]: I0308 03:10:15.062502 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-4lx8s"]
Mar 08 03:10:15.063068 master-0 kubenswrapper[3991]: I0308 03:10:15.062791 3991 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4lx8s" Mar 08 03:10:15.063068 master-0 kubenswrapper[3991]: E0308 03:10:15.062836 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4lx8s" podUID="0e59f2e1-7fbc-43b1-bc81-7ca5f058d774" Mar 08 03:10:15.187623 master-0 kubenswrapper[3991]: I0308 03:10:15.187519 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2ng6\" (UniqueName: \"kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6\") pod \"network-check-target-4lx8s\" (UID: \"0e59f2e1-7fbc-43b1-bc81-7ca5f058d774\") " pod="openshift-network-diagnostics/network-check-target-4lx8s" Mar 08 03:10:15.288480 master-0 kubenswrapper[3991]: I0308 03:10:15.288441 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2ng6\" (UniqueName: \"kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6\") pod \"network-check-target-4lx8s\" (UID: \"0e59f2e1-7fbc-43b1-bc81-7ca5f058d774\") " pod="openshift-network-diagnostics/network-check-target-4lx8s" Mar 08 03:10:15.300191 master-0 kubenswrapper[3991]: E0308 03:10:15.300151 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 08 03:10:15.300191 master-0 kubenswrapper[3991]: E0308 03:10:15.300185 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 08 03:10:15.300285 master-0 
kubenswrapper[3991]: E0308 03:10:15.300199 3991 projected.go:194] Error preparing data for projected volume kube-api-access-w2ng6 for pod openshift-network-diagnostics/network-check-target-4lx8s: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 03:10:15.300285 master-0 kubenswrapper[3991]: E0308 03:10:15.300270 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6 podName:0e59f2e1-7fbc-43b1-bc81-7ca5f058d774 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:15.800251973 +0000 UTC m=+77.366189198 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-w2ng6" (UniqueName: "kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6") pod "network-check-target-4lx8s" (UID: "0e59f2e1-7fbc-43b1-bc81-7ca5f058d774") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 03:10:15.894021 master-0 kubenswrapper[3991]: I0308 03:10:15.893956 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2ng6\" (UniqueName: \"kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6\") pod \"network-check-target-4lx8s\" (UID: \"0e59f2e1-7fbc-43b1-bc81-7ca5f058d774\") " pod="openshift-network-diagnostics/network-check-target-4lx8s" Mar 08 03:10:15.894209 master-0 kubenswrapper[3991]: E0308 03:10:15.894153 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 08 03:10:15.894209 master-0 kubenswrapper[3991]: E0308 03:10:15.894178 3991 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 08 03:10:15.894209 master-0 kubenswrapper[3991]: E0308 03:10:15.894190 3991 projected.go:194] Error preparing data for projected volume kube-api-access-w2ng6 for pod openshift-network-diagnostics/network-check-target-4lx8s: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 03:10:15.894351 master-0 kubenswrapper[3991]: E0308 03:10:15.894242 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6 podName:0e59f2e1-7fbc-43b1-bc81-7ca5f058d774 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:16.89422515 +0000 UTC m=+78.460162375 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-w2ng6" (UniqueName: "kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6") pod "network-check-target-4lx8s" (UID: "0e59f2e1-7fbc-43b1-bc81-7ca5f058d774") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 03:10:16.217053 master-0 kubenswrapper[3991]: I0308 03:10:16.216767 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:10:16.217053 master-0 kubenswrapper[3991]: E0308 03:10:16.216869 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a" Mar 08 03:10:16.296591 master-0 kubenswrapper[3991]: I0308 03:10:16.296548 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:10:16.297280 master-0 kubenswrapper[3991]: E0308 03:10:16.296663 3991 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 03:10:16.297280 master-0 kubenswrapper[3991]: E0308 03:10:16.296705 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs podName:f6ee6202-11e5-4586-ae46-075da1ad7f1a nodeName:}" failed. No retries permitted until 2026-03-08 03:10:32.296693115 +0000 UTC m=+93.862630340 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs") pod "network-metrics-daemon-2l64n" (UID: "f6ee6202-11e5-4586-ae46-075da1ad7f1a") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 03:10:16.901054 master-0 kubenswrapper[3991]: I0308 03:10:16.901008 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2ng6\" (UniqueName: \"kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6\") pod \"network-check-target-4lx8s\" (UID: \"0e59f2e1-7fbc-43b1-bc81-7ca5f058d774\") " pod="openshift-network-diagnostics/network-check-target-4lx8s" Mar 08 03:10:16.901248 master-0 kubenswrapper[3991]: E0308 03:10:16.901171 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 08 03:10:16.901248 master-0 kubenswrapper[3991]: E0308 03:10:16.901188 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 08 03:10:16.901248 master-0 kubenswrapper[3991]: E0308 03:10:16.901199 3991 projected.go:194] Error preparing data for projected volume kube-api-access-w2ng6 for pod openshift-network-diagnostics/network-check-target-4lx8s: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 03:10:16.901248 master-0 kubenswrapper[3991]: E0308 03:10:16.901248 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6 podName:0e59f2e1-7fbc-43b1-bc81-7ca5f058d774 nodeName:}" failed. 
No retries permitted until 2026-03-08 03:10:18.901235501 +0000 UTC m=+80.467172726 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-w2ng6" (UniqueName: "kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6") pod "network-check-target-4lx8s" (UID: "0e59f2e1-7fbc-43b1-bc81-7ca5f058d774") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 03:10:17.218983 master-0 kubenswrapper[3991]: I0308 03:10:17.218854 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4lx8s" Mar 08 03:10:17.218983 master-0 kubenswrapper[3991]: E0308 03:10:17.218965 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4lx8s" podUID="0e59f2e1-7fbc-43b1-bc81-7ca5f058d774" Mar 08 03:10:18.216756 master-0 kubenswrapper[3991]: I0308 03:10:18.216708 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:10:18.217301 master-0 kubenswrapper[3991]: E0308 03:10:18.216864 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a" Mar 08 03:10:18.356849 master-0 kubenswrapper[3991]: I0308 03:10:18.347476 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-ppdzb"] Mar 08 03:10:18.356849 master-0 kubenswrapper[3991]: I0308 03:10:18.347854 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:10:18.356849 master-0 kubenswrapper[3991]: I0308 03:10:18.349779 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 08 03:10:18.356849 master-0 kubenswrapper[3991]: I0308 03:10:18.350074 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 08 03:10:18.356849 master-0 kubenswrapper[3991]: I0308 03:10:18.350223 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 08 03:10:18.356849 master-0 kubenswrapper[3991]: I0308 03:10:18.350697 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 08 03:10:18.356849 master-0 kubenswrapper[3991]: I0308 03:10:18.352269 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 08 03:10:18.412542 master-0 kubenswrapper[3991]: I0308 03:10:18.412493 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ct9j\" (UniqueName: \"kubernetes.io/projected/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-kube-api-access-2ct9j\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:10:18.412729 master-0 
kubenswrapper[3991]: I0308 03:10:18.412612 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-webhook-cert\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:10:18.412729 master-0 kubenswrapper[3991]: I0308 03:10:18.412659 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-env-overrides\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:10:18.412729 master-0 kubenswrapper[3991]: I0308 03:10:18.412707 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-ovnkube-identity-cm\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:10:18.513614 master-0 kubenswrapper[3991]: I0308 03:10:18.513564 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ct9j\" (UniqueName: \"kubernetes.io/projected/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-kube-api-access-2ct9j\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:10:18.513699 master-0 kubenswrapper[3991]: I0308 03:10:18.513624 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-webhook-cert\") pod 
\"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:10:18.513699 master-0 kubenswrapper[3991]: I0308 03:10:18.513657 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-env-overrides\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:10:18.513699 master-0 kubenswrapper[3991]: I0308 03:10:18.513675 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-ovnkube-identity-cm\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:10:18.514606 master-0 kubenswrapper[3991]: I0308 03:10:18.514572 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-ovnkube-identity-cm\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:10:18.515004 master-0 kubenswrapper[3991]: E0308 03:10:18.514976 3991 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: secret "network-node-identity-cert" not found Mar 08 03:10:18.515045 master-0 kubenswrapper[3991]: E0308 03:10:18.515029 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-webhook-cert podName:4fd323ae-11bf-4207-bdce-4d51a9c19dc3 nodeName:}" failed. 
No retries permitted until 2026-03-08 03:10:19.015013186 +0000 UTC m=+80.580950421 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-webhook-cert") pod "network-node-identity-ppdzb" (UID: "4fd323ae-11bf-4207-bdce-4d51a9c19dc3") : secret "network-node-identity-cert" not found Mar 08 03:10:18.515652 master-0 kubenswrapper[3991]: I0308 03:10:18.515618 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-env-overrides\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:10:18.694202 master-0 kubenswrapper[3991]: I0308 03:10:18.692945 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ct9j\" (UniqueName: \"kubernetes.io/projected/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-kube-api-access-2ct9j\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:10:18.918315 master-0 kubenswrapper[3991]: I0308 03:10:18.918214 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2ng6\" (UniqueName: \"kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6\") pod \"network-check-target-4lx8s\" (UID: \"0e59f2e1-7fbc-43b1-bc81-7ca5f058d774\") " pod="openshift-network-diagnostics/network-check-target-4lx8s" Mar 08 03:10:18.918698 master-0 kubenswrapper[3991]: E0308 03:10:18.918484 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 08 03:10:18.918698 master-0 kubenswrapper[3991]: E0308 03:10:18.918511 3991 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 08 03:10:18.918698 master-0 kubenswrapper[3991]: E0308 03:10:18.918528 3991 projected.go:194] Error preparing data for projected volume kube-api-access-w2ng6 for pod openshift-network-diagnostics/network-check-target-4lx8s: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 03:10:18.918698 master-0 kubenswrapper[3991]: E0308 03:10:18.918596 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6 podName:0e59f2e1-7fbc-43b1-bc81-7ca5f058d774 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:22.918574099 +0000 UTC m=+84.484511354 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-w2ng6" (UniqueName: "kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6") pod "network-check-target-4lx8s" (UID: "0e59f2e1-7fbc-43b1-bc81-7ca5f058d774") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 03:10:19.019659 master-0 kubenswrapper[3991]: I0308 03:10:19.019465 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-webhook-cert\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:10:19.025477 master-0 kubenswrapper[3991]: I0308 03:10:19.025433 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-webhook-cert\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:10:19.216762 master-0 kubenswrapper[3991]: I0308 03:10:19.216573 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4lx8s" Mar 08 03:10:19.217804 master-0 kubenswrapper[3991]: E0308 03:10:19.217719 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4lx8s" podUID="0e59f2e1-7fbc-43b1-bc81-7ca5f058d774" Mar 08 03:10:19.269115 master-0 kubenswrapper[3991]: I0308 03:10:19.269032 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:10:19.284996 master-0 kubenswrapper[3991]: W0308 03:10:19.284932 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fd323ae_11bf_4207_bdce_4d51a9c19dc3.slice/crio-2cfaca9fcdc537eb7c408c01daad733c4e6c46861c4477e533321e5ad366b94d WatchSource:0}: Error finding container 2cfaca9fcdc537eb7c408c01daad733c4e6c46861c4477e533321e5ad366b94d: Status 404 returned error can't find the container with id 2cfaca9fcdc537eb7c408c01daad733c4e6c46861c4477e533321e5ad366b94d Mar 08 03:10:19.534579 master-0 kubenswrapper[3991]: I0308 03:10:19.533542 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-ppdzb" event={"ID":"4fd323ae-11bf-4207-bdce-4d51a9c19dc3","Type":"ContainerStarted","Data":"2cfaca9fcdc537eb7c408c01daad733c4e6c46861c4477e533321e5ad366b94d"} Mar 08 03:10:19.537384 master-0 kubenswrapper[3991]: I0308 03:10:19.537322 3991 generic.go:334] "Generic (PLEG): container finished" podID="d5eee869-c27f-4534-bbce-d954c42b36a3" containerID="e69760dd587dd773054d2c68d80450fae7ea78d2c0d9ae71eb6479ccbfb89605" exitCode=0 Mar 08 03:10:19.537384 master-0 kubenswrapper[3991]: I0308 03:10:19.537376 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c8gc6" event={"ID":"d5eee869-c27f-4534-bbce-d954c42b36a3","Type":"ContainerDied","Data":"e69760dd587dd773054d2c68d80450fae7ea78d2c0d9ae71eb6479ccbfb89605"} Mar 08 03:10:20.216478 master-0 kubenswrapper[3991]: I0308 03:10:20.216416 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:10:20.216709 master-0 kubenswrapper[3991]: E0308 03:10:20.216522 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a" Mar 08 03:10:21.217126 master-0 kubenswrapper[3991]: I0308 03:10:21.217033 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4lx8s" Mar 08 03:10:21.222850 master-0 kubenswrapper[3991]: E0308 03:10:21.217267 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4lx8s" podUID="0e59f2e1-7fbc-43b1-bc81-7ca5f058d774" Mar 08 03:10:22.216948 master-0 kubenswrapper[3991]: I0308 03:10:22.216881 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:10:22.217862 master-0 kubenswrapper[3991]: E0308 03:10:22.217038 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a"
Mar 08 03:10:22.954670 master-0 kubenswrapper[3991]: I0308 03:10:22.954608 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2ng6\" (UniqueName: \"kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6\") pod \"network-check-target-4lx8s\" (UID: \"0e59f2e1-7fbc-43b1-bc81-7ca5f058d774\") " pod="openshift-network-diagnostics/network-check-target-4lx8s"
Mar 08 03:10:22.954916 master-0 kubenswrapper[3991]: E0308 03:10:22.954751 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 08 03:10:22.954916 master-0 kubenswrapper[3991]: E0308 03:10:22.954766 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 08 03:10:22.954916 master-0 kubenswrapper[3991]: E0308 03:10:22.954776 3991 projected.go:194] Error preparing data for projected volume kube-api-access-w2ng6 for pod openshift-network-diagnostics/network-check-target-4lx8s: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 08 03:10:22.954916 master-0 kubenswrapper[3991]: E0308 03:10:22.954816 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6 podName:0e59f2e1-7fbc-43b1-bc81-7ca5f058d774 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:30.954801697 +0000 UTC m=+92.520738922 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-w2ng6" (UniqueName: "kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6") pod "network-check-target-4lx8s" (UID: "0e59f2e1-7fbc-43b1-bc81-7ca5f058d774") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 08 03:10:23.217472 master-0 kubenswrapper[3991]: I0308 03:10:23.216829 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4lx8s"
Mar 08 03:10:23.217472 master-0 kubenswrapper[3991]: E0308 03:10:23.216982 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4lx8s" podUID="0e59f2e1-7fbc-43b1-bc81-7ca5f058d774"
Mar 08 03:10:24.216569 master-0 kubenswrapper[3991]: I0308 03:10:24.216502 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:10:24.217428 master-0 kubenswrapper[3991]: E0308 03:10:24.216726 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a"
Mar 08 03:10:25.216613 master-0 kubenswrapper[3991]: I0308 03:10:25.216561 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4lx8s"
Mar 08 03:10:25.217103 master-0 kubenswrapper[3991]: E0308 03:10:25.216725 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4lx8s" podUID="0e59f2e1-7fbc-43b1-bc81-7ca5f058d774"
Mar 08 03:10:26.216477 master-0 kubenswrapper[3991]: I0308 03:10:26.216089 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:10:26.216477 master-0 kubenswrapper[3991]: E0308 03:10:26.216197 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a"
Mar 08 03:10:27.217095 master-0 kubenswrapper[3991]: I0308 03:10:27.217042 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4lx8s"
Mar 08 03:10:27.217682 master-0 kubenswrapper[3991]: E0308 03:10:27.217132 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4lx8s" podUID="0e59f2e1-7fbc-43b1-bc81-7ca5f058d774"
Mar 08 03:10:27.227341 master-0 kubenswrapper[3991]: W0308 03:10:27.227286 3991 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Mar 08 03:10:27.230105 master-0 kubenswrapper[3991]: I0308 03:10:27.230046 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"]
Mar 08 03:10:28.216498 master-0 kubenswrapper[3991]: I0308 03:10:28.216280 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:10:28.216985 master-0 kubenswrapper[3991]: E0308 03:10:28.216881 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a"
Mar 08 03:10:29.217456 master-0 kubenswrapper[3991]: I0308 03:10:29.217410 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4lx8s"
Mar 08 03:10:29.224556 master-0 kubenswrapper[3991]: E0308 03:10:29.218777 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4lx8s" podUID="0e59f2e1-7fbc-43b1-bc81-7ca5f058d774"
Mar 08 03:10:30.216483 master-0 kubenswrapper[3991]: I0308 03:10:30.216441 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:10:30.216723 master-0 kubenswrapper[3991]: E0308 03:10:30.216553 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a"
Mar 08 03:10:31.037159 master-0 kubenswrapper[3991]: I0308 03:10:31.037099 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2ng6\" (UniqueName: \"kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6\") pod \"network-check-target-4lx8s\" (UID: \"0e59f2e1-7fbc-43b1-bc81-7ca5f058d774\") " pod="openshift-network-diagnostics/network-check-target-4lx8s"
Mar 08 03:10:31.037591 master-0 kubenswrapper[3991]: E0308 03:10:31.037302 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 08 03:10:31.037591 master-0 kubenswrapper[3991]: E0308 03:10:31.037338 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 08 03:10:31.037591 master-0 kubenswrapper[3991]: E0308 03:10:31.037353 3991 projected.go:194] Error preparing data for projected volume kube-api-access-w2ng6 for pod openshift-network-diagnostics/network-check-target-4lx8s: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 08 03:10:31.037591 master-0 kubenswrapper[3991]: E0308 03:10:31.037412 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6 podName:0e59f2e1-7fbc-43b1-bc81-7ca5f058d774 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:47.037393251 +0000 UTC m=+108.603330486 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-w2ng6" (UniqueName: "kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6") pod "network-check-target-4lx8s" (UID: "0e59f2e1-7fbc-43b1-bc81-7ca5f058d774") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 08 03:10:31.216929 master-0 kubenswrapper[3991]: I0308 03:10:31.216784 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4lx8s"
Mar 08 03:10:31.216929 master-0 kubenswrapper[3991]: E0308 03:10:31.216924 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4lx8s" podUID="0e59f2e1-7fbc-43b1-bc81-7ca5f058d774"
Mar 08 03:10:32.216516 master-0 kubenswrapper[3991]: I0308 03:10:32.216420 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:10:32.217439 master-0 kubenswrapper[3991]: E0308 03:10:32.216625 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a"
Mar 08 03:10:32.347263 master-0 kubenswrapper[3991]: I0308 03:10:32.347172 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:10:32.347521 master-0 kubenswrapper[3991]: E0308 03:10:32.347377 3991 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 08 03:10:32.347521 master-0 kubenswrapper[3991]: E0308 03:10:32.347459 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs podName:f6ee6202-11e5-4586-ae46-075da1ad7f1a nodeName:}" failed. No retries permitted until 2026-03-08 03:11:04.347436629 +0000 UTC m=+125.913373884 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs") pod "network-metrics-daemon-2l64n" (UID: "f6ee6202-11e5-4586-ae46-075da1ad7f1a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 08 03:10:33.217203 master-0 kubenswrapper[3991]: I0308 03:10:33.217118 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4lx8s"
Mar 08 03:10:33.218188 master-0 kubenswrapper[3991]: E0308 03:10:33.217315 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4lx8s" podUID="0e59f2e1-7fbc-43b1-bc81-7ca5f058d774"
Mar 08 03:10:34.216497 master-0 kubenswrapper[3991]: I0308 03:10:34.216400 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:10:34.216772 master-0 kubenswrapper[3991]: E0308 03:10:34.216571 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a"
Mar 08 03:10:34.582704 master-0 kubenswrapper[3991]: I0308 03:10:34.582589 3991 generic.go:334] "Generic (PLEG): container finished" podID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerID="be2882c714bad91ca07c5f4fb9d9845ae081aa06f8fae77c04d5d862e91663ab" exitCode=0
Mar 08 03:10:34.584007 master-0 kubenswrapper[3991]: I0308 03:10:34.582778 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" event={"ID":"18c148bd-0a23-46f1-b54e-6e8fd18825d5","Type":"ContainerDied","Data":"be2882c714bad91ca07c5f4fb9d9845ae081aa06f8fae77c04d5d862e91663ab"}
Mar 08 03:10:34.589290 master-0 kubenswrapper[3991]: I0308 03:10:34.589227 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-ppdzb" event={"ID":"4fd323ae-11bf-4207-bdce-4d51a9c19dc3","Type":"ContainerStarted","Data":"c5eec4110852b5b6f65ead45beeb23e454a4f0a36ca8d676067c0e98d6a8439c"}
Mar 08 03:10:34.589378 master-0 kubenswrapper[3991]: I0308 03:10:34.589293 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-ppdzb" event={"ID":"4fd323ae-11bf-4207-bdce-4d51a9c19dc3","Type":"ContainerStarted","Data":"c0ff9b8e2d49218f2727d432756e1a80012d8ae4568b1d0b7bd5499ffddd6b5f"}
Mar 08 03:10:34.594598 master-0 kubenswrapper[3991]: I0308 03:10:34.594532 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch" event={"ID":"631b3a8e-43e0-4818-b6e1-bd61ac531ab6","Type":"ContainerStarted","Data":"ae6eee5afe5e46fa6bdda2c614fc3054391ae41ef6fbf435d604af42a3bf8ed4"}
Mar 08 03:10:34.600368 master-0 kubenswrapper[3991]: I0308 03:10:34.600296 3991 generic.go:334] "Generic (PLEG): container finished" podID="d5eee869-c27f-4534-bbce-d954c42b36a3" containerID="c5b6441f57692234cdd23b54b466923a1bdca368557471aa9c56fb86e4cb27c5" exitCode=0
Mar 08 03:10:34.600514 master-0 kubenswrapper[3991]: I0308 03:10:34.600367 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c8gc6" event={"ID":"d5eee869-c27f-4534-bbce-d954c42b36a3","Type":"ContainerDied","Data":"c5b6441f57692234cdd23b54b466923a1bdca368557471aa9c56fb86e4cb27c5"}
Mar 08 03:10:34.617280 master-0 kubenswrapper[3991]: I0308 03:10:34.617183 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=7.617155372 podStartE2EDuration="7.617155372s" podCreationTimestamp="2026-03-08 03:10:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:10:29.601133989 +0000 UTC m=+91.167071214" watchObservedRunningTime="2026-03-08 03:10:34.617155372 +0000 UTC m=+96.183092637"
Mar 08 03:10:34.695430 master-0 kubenswrapper[3991]: I0308 03:10:34.694833 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-ppdzb" podStartSLOduration=2.46650442 podStartE2EDuration="16.694799486s" podCreationTimestamp="2026-03-08 03:10:18 +0000 UTC" firstStartedPulling="2026-03-08 03:10:19.288663322 +0000 UTC m=+80.854600587" lastFinishedPulling="2026-03-08 03:10:33.516958387 +0000 UTC m=+95.082895653" observedRunningTime="2026-03-08 03:10:34.674177453 +0000 UTC m=+96.240114708" watchObservedRunningTime="2026-03-08 03:10:34.694799486 +0000 UTC m=+96.260736751"
Mar 08 03:10:34.695837 master-0 kubenswrapper[3991]: I0308 03:10:34.695760 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch" podStartSLOduration=3.44313736 podStartE2EDuration="23.695748411s" podCreationTimestamp="2026-03-08 03:10:11 +0000 UTC" firstStartedPulling="2026-03-08 03:10:13.233536675 +0000 UTC m=+74.799473920" lastFinishedPulling="2026-03-08 03:10:33.486147706 +0000 UTC m=+95.052084971" observedRunningTime="2026-03-08 03:10:34.693653526 +0000 UTC m=+96.259590801" watchObservedRunningTime="2026-03-08 03:10:34.695748411 +0000 UTC m=+96.261685676"
Mar 08 03:10:35.217196 master-0 kubenswrapper[3991]: I0308 03:10:35.217127 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4lx8s"
Mar 08 03:10:35.217404 master-0 kubenswrapper[3991]: E0308 03:10:35.217349 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4lx8s" podUID="0e59f2e1-7fbc-43b1-bc81-7ca5f058d774"
Mar 08 03:10:35.611346 master-0 kubenswrapper[3991]: I0308 03:10:35.610627 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" event={"ID":"18c148bd-0a23-46f1-b54e-6e8fd18825d5","Type":"ContainerStarted","Data":"d2717efe98dded98a430bdbb1e6c67542780e4d9e9da8780960f6cb5607dfa1c"}
Mar 08 03:10:35.611346 master-0 kubenswrapper[3991]: I0308 03:10:35.611105 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" event={"ID":"18c148bd-0a23-46f1-b54e-6e8fd18825d5","Type":"ContainerStarted","Data":"59842391c2f906e2a1d04139b13a4ad11d03d05812a1e42fe92cdb6ad399f2df"}
Mar 08 03:10:35.611346 master-0 kubenswrapper[3991]: I0308 03:10:35.611127 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" event={"ID":"18c148bd-0a23-46f1-b54e-6e8fd18825d5","Type":"ContainerStarted","Data":"3a9dc2434f3a5f5442ceae28b6a41707b31b23f92a0be759748599422ca97a2b"}
Mar 08 03:10:35.611346 master-0 kubenswrapper[3991]: I0308 03:10:35.611147 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" event={"ID":"18c148bd-0a23-46f1-b54e-6e8fd18825d5","Type":"ContainerStarted","Data":"8b175beb4b4b0f0ca1a091f7935455e85c66628fb2cebb53ac0ceffa81dfe13c"}
Mar 08 03:10:35.611346 master-0 kubenswrapper[3991]: I0308 03:10:35.611165 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" event={"ID":"18c148bd-0a23-46f1-b54e-6e8fd18825d5","Type":"ContainerStarted","Data":"d287272d23a2bc7ff0f8d11895f5450b4df0a1fcc17b6293207d42ed15b1f661"}
Mar 08 03:10:35.611346 master-0 kubenswrapper[3991]: I0308 03:10:35.611184 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" event={"ID":"18c148bd-0a23-46f1-b54e-6e8fd18825d5","Type":"ContainerStarted","Data":"2d9e906d444a87e8be6d10da1d15aed8fb665fe3a18c1a9658beaacb2dc08a71"}
Mar 08 03:10:35.617063 master-0 kubenswrapper[3991]: I0308 03:10:35.616969 3991 generic.go:334] "Generic (PLEG): container finished" podID="d5eee869-c27f-4534-bbce-d954c42b36a3" containerID="78b54e7113882d3d58fadca33d022029333723850c915170784718d6b2d76fb0" exitCode=0
Mar 08 03:10:35.617193 master-0 kubenswrapper[3991]: I0308 03:10:35.617076 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c8gc6" event={"ID":"d5eee869-c27f-4534-bbce-d954c42b36a3","Type":"ContainerDied","Data":"78b54e7113882d3d58fadca33d022029333723850c915170784718d6b2d76fb0"}
Mar 08 03:10:36.217031 master-0 kubenswrapper[3991]: I0308 03:10:36.216846 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:10:36.217238 master-0 kubenswrapper[3991]: E0308 03:10:36.217067 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a"
Mar 08 03:10:36.626266 master-0 kubenswrapper[3991]: I0308 03:10:36.626188 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c8gc6" event={"ID":"d5eee869-c27f-4534-bbce-d954c42b36a3","Type":"ContainerStarted","Data":"4bb4be7d3d2ef03db255bffb99c47c03a28d209c3161a96b9d367a00ef89276d"}
Mar 08 03:10:36.654958 master-0 kubenswrapper[3991]: I0308 03:10:36.654806 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-c8gc6" podStartSLOduration=4.235236241 podStartE2EDuration="37.654786314s" podCreationTimestamp="2026-03-08 03:09:59 +0000 UTC" firstStartedPulling="2026-03-08 03:10:00.017280035 +0000 UTC m=+61.583217300" lastFinishedPulling="2026-03-08 03:10:33.436830148 +0000 UTC m=+95.002767373" observedRunningTime="2026-03-08 03:10:36.653667665 +0000 UTC m=+98.219604920" watchObservedRunningTime="2026-03-08 03:10:36.654786314 +0000 UTC m=+98.220723569"
Mar 08 03:10:37.007947 master-0 kubenswrapper[3991]: I0308 03:10:37.007830 3991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z6mfs"]
Mar 08 03:10:37.217350 master-0 kubenswrapper[3991]: I0308 03:10:37.217269 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4lx8s"
Mar 08 03:10:37.217654 master-0 kubenswrapper[3991]: E0308 03:10:37.217419 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4lx8s" podUID="0e59f2e1-7fbc-43b1-bc81-7ca5f058d774"
Mar 08 03:10:37.235031 master-0 kubenswrapper[3991]: I0308 03:10:37.234948 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 08 03:10:37.637732 master-0 kubenswrapper[3991]: I0308 03:10:37.637655 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" event={"ID":"18c148bd-0a23-46f1-b54e-6e8fd18825d5","Type":"ContainerStarted","Data":"e8ae217b16264d0a65f7a6526e393271363768450bd80231ec390001016f54d9"}
Mar 08 03:10:38.216642 master-0 kubenswrapper[3991]: I0308 03:10:38.216565 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:10:38.216941 master-0 kubenswrapper[3991]: E0308 03:10:38.216704 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a"
Mar 08 03:10:39.216752 master-0 kubenswrapper[3991]: I0308 03:10:39.216657 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4lx8s"
Mar 08 03:10:39.218141 master-0 kubenswrapper[3991]: E0308 03:10:39.218068 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4lx8s" podUID="0e59f2e1-7fbc-43b1-bc81-7ca5f058d774"
Mar 08 03:10:39.301409 master-0 kubenswrapper[3991]: I0308 03:10:39.301197 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=2.301166034 podStartE2EDuration="2.301166034s" podCreationTimestamp="2026-03-08 03:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:10:39.30065584 +0000 UTC m=+100.866593075" watchObservedRunningTime="2026-03-08 03:10:39.301166034 +0000 UTC m=+100.867103289"
Mar 08 03:10:40.216692 master-0 kubenswrapper[3991]: I0308 03:10:40.216535 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:10:40.217099 master-0 kubenswrapper[3991]: E0308 03:10:40.216684 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a"
Mar 08 03:10:40.652338 master-0 kubenswrapper[3991]: I0308 03:10:40.651644 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" event={"ID":"18c148bd-0a23-46f1-b54e-6e8fd18825d5","Type":"ContainerStarted","Data":"ceef095090a1d3d01781b25cb0242da09fb6b070883bd9d80a5643827283dd10"}
Mar 08 03:10:40.652338 master-0 kubenswrapper[3991]: I0308 03:10:40.652349 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:40.652728 master-0 kubenswrapper[3991]: I0308 03:10:40.652391 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:40.652728 master-0 kubenswrapper[3991]: I0308 03:10:40.652065 3991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="sbdb" containerID="cri-o://e8ae217b16264d0a65f7a6526e393271363768450bd80231ec390001016f54d9" gracePeriod=30
Mar 08 03:10:40.652728 master-0 kubenswrapper[3991]: I0308 03:10:40.652120 3991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="northd" containerID="cri-o://59842391c2f906e2a1d04139b13a4ad11d03d05812a1e42fe92cdb6ad399f2df" gracePeriod=30
Mar 08 03:10:40.652728 master-0 kubenswrapper[3991]: I0308 03:10:40.652109 3991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="nbdb" containerID="cri-o://d2717efe98dded98a430bdbb1e6c67542780e4d9e9da8780960f6cb5607dfa1c" gracePeriod=30
Mar 08 03:10:40.652728 master-0 kubenswrapper[3991]: I0308 03:10:40.652161 3991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="kube-rbac-proxy-node" containerID="cri-o://8b175beb4b4b0f0ca1a091f7935455e85c66628fb2cebb53ac0ceffa81dfe13c" gracePeriod=30
Mar 08 03:10:40.652728 master-0 kubenswrapper[3991]: I0308 03:10:40.652160 3991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="ovn-acl-logging" containerID="cri-o://d287272d23a2bc7ff0f8d11895f5450b4df0a1fcc17b6293207d42ed15b1f661" gracePeriod=30
Mar 08 03:10:40.653308 master-0 kubenswrapper[3991]: I0308 03:10:40.652035 3991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="ovn-controller" containerID="cri-o://2d9e906d444a87e8be6d10da1d15aed8fb665fe3a18c1a9658beaacb2dc08a71" gracePeriod=30
Mar 08 03:10:40.653308 master-0 kubenswrapper[3991]: I0308 03:10:40.652419 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:40.653308 master-0 kubenswrapper[3991]: I0308 03:10:40.652078 3991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://3a9dc2434f3a5f5442ceae28b6a41707b31b23f92a0be759748599422ca97a2b" gracePeriod=30
Mar 08 03:10:40.656972 master-0 kubenswrapper[3991]: E0308 03:10:40.656843 3991 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8ae217b16264d0a65f7a6526e393271363768450bd80231ec390001016f54d9" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Mar 08 03:10:40.657314 master-0 kubenswrapper[3991]: E0308 03:10:40.657202 3991 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d2717efe98dded98a430bdbb1e6c67542780e4d9e9da8780960f6cb5607dfa1c" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Mar 08 03:10:40.662940 master-0 kubenswrapper[3991]: E0308 03:10:40.660127 3991 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d2717efe98dded98a430bdbb1e6c67542780e4d9e9da8780960f6cb5607dfa1c" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Mar 08 03:10:40.674302 master-0 kubenswrapper[3991]: E0308 03:10:40.662980 3991 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d2717efe98dded98a430bdbb1e6c67542780e4d9e9da8780960f6cb5607dfa1c" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Mar 08 03:10:40.674302 master-0 kubenswrapper[3991]: E0308 03:10:40.663042 3991 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="nbdb"
Mar 08 03:10:40.674302 master-0 kubenswrapper[3991]: E0308 03:10:40.664404 3991 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8ae217b16264d0a65f7a6526e393271363768450bd80231ec390001016f54d9" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Mar 08 03:10:40.674302 master-0 kubenswrapper[3991]: E0308 03:10:40.666549 3991 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e8ae217b16264d0a65f7a6526e393271363768450bd80231ec390001016f54d9" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Mar 08 03:10:40.674302 master-0 kubenswrapper[3991]: E0308 03:10:40.666603 3991 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="sbdb"
Mar 08 03:10:40.680845 master-0 kubenswrapper[3991]: I0308 03:10:40.680776 3991 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="ovnkube-controller" containerID="cri-o://ceef095090a1d3d01781b25cb0242da09fb6b070883bd9d80a5643827283dd10" gracePeriod=30
Mar 08 03:10:41.216833 master-0 kubenswrapper[3991]: I0308 03:10:41.216743 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4lx8s"
Mar 08 03:10:41.217215 master-0 kubenswrapper[3991]: E0308 03:10:41.216894 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4lx8s" podUID="0e59f2e1-7fbc-43b1-bc81-7ca5f058d774"
Mar 08 03:10:41.230067 master-0 kubenswrapper[3991]: I0308 03:10:41.229965 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" podStartSLOduration=8.74720455 podStartE2EDuration="29.229941221s" podCreationTimestamp="2026-03-08 03:10:12 +0000 UTC" firstStartedPulling="2026-03-08 03:10:12.991984755 +0000 UTC m=+74.557922010" lastFinishedPulling="2026-03-08 03:10:33.474721456 +0000 UTC m=+95.040658681" observedRunningTime="2026-03-08 03:10:40.68825672 +0000 UTC m=+102.254193975" watchObservedRunningTime="2026-03-08 03:10:41.229941221 +0000 UTC m=+102.795878476"
Mar 08 03:10:41.230713 master-0 kubenswrapper[3991]: I0308 03:10:41.230662 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 08 03:10:41.660463 master-0 kubenswrapper[3991]: I0308 03:10:41.660352 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6mfs_18c148bd-0a23-46f1-b54e-6e8fd18825d5/ovnkube-controller/0.log"
Mar 08 03:10:41.663973 master-0 kubenswrapper[3991]: I0308 03:10:41.663680 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6mfs_18c148bd-0a23-46f1-b54e-6e8fd18825d5/kube-rbac-proxy-ovn-metrics/0.log"
Mar 08 03:10:41.665073 master-0 kubenswrapper[3991]: I0308 03:10:41.665020 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6mfs_18c148bd-0a23-46f1-b54e-6e8fd18825d5/kube-rbac-proxy-node/0.log"
Mar 08 03:10:41.666897 master-0 kubenswrapper[3991]: I0308 03:10:41.666191 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6mfs_18c148bd-0a23-46f1-b54e-6e8fd18825d5/ovn-acl-logging/0.log"
Mar 08 03:10:41.667769 master-0 kubenswrapper[3991]: I0308 03:10:41.667717 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6mfs_18c148bd-0a23-46f1-b54e-6e8fd18825d5/ovn-controller/0.log"
Mar 08 03:10:41.668461 master-0 kubenswrapper[3991]: I0308 03:10:41.668393 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" event={"ID":"18c148bd-0a23-46f1-b54e-6e8fd18825d5","Type":"ContainerDied","Data":"ceef095090a1d3d01781b25cb0242da09fb6b070883bd9d80a5643827283dd10"}
Mar 08 03:10:41.668549 master-0 kubenswrapper[3991]: I0308 03:10:41.668389 3991 generic.go:334] "Generic (PLEG): container finished" podID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerID="ceef095090a1d3d01781b25cb0242da09fb6b070883bd9d80a5643827283dd10" exitCode=1
Mar 08 03:10:41.669347 master-0 kubenswrapper[3991]: I0308 03:10:41.669098 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" event={"ID":"18c148bd-0a23-46f1-b54e-6e8fd18825d5","Type":"ContainerDied","Data":"e8ae217b16264d0a65f7a6526e393271363768450bd80231ec390001016f54d9"}
Mar 08 03:10:41.669347 master-0 kubenswrapper[3991]: I0308 03:10:41.668498 3991 generic.go:334] "Generic (PLEG): container finished" podID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerID="e8ae217b16264d0a65f7a6526e393271363768450bd80231ec390001016f54d9" exitCode=0
Mar 08 03:10:41.669347 master-0 kubenswrapper[3991]: I0308 03:10:41.669339 3991 generic.go:334] "Generic (PLEG): container finished" podID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerID="d2717efe98dded98a430bdbb1e6c67542780e4d9e9da8780960f6cb5607dfa1c" exitCode=0
Mar 08 03:10:41.669635 master-0 kubenswrapper[3991]: I0308 03:10:41.669353 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" event={"ID":"18c148bd-0a23-46f1-b54e-6e8fd18825d5","Type":"ContainerDied","Data":"d2717efe98dded98a430bdbb1e6c67542780e4d9e9da8780960f6cb5607dfa1c"}
Mar 08 03:10:41.669635
master-0 kubenswrapper[3991]: I0308 03:10:41.669370 3991 generic.go:334] "Generic (PLEG): container finished" podID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerID="59842391c2f906e2a1d04139b13a4ad11d03d05812a1e42fe92cdb6ad399f2df" exitCode=0 Mar 08 03:10:41.669635 master-0 kubenswrapper[3991]: I0308 03:10:41.669395 3991 generic.go:334] "Generic (PLEG): container finished" podID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerID="3a9dc2434f3a5f5442ceae28b6a41707b31b23f92a0be759748599422ca97a2b" exitCode=143 Mar 08 03:10:41.669635 master-0 kubenswrapper[3991]: I0308 03:10:41.669417 3991 generic.go:334] "Generic (PLEG): container finished" podID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerID="8b175beb4b4b0f0ca1a091f7935455e85c66628fb2cebb53ac0ceffa81dfe13c" exitCode=143 Mar 08 03:10:41.669635 master-0 kubenswrapper[3991]: I0308 03:10:41.669436 3991 generic.go:334] "Generic (PLEG): container finished" podID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerID="d287272d23a2bc7ff0f8d11895f5450b4df0a1fcc17b6293207d42ed15b1f661" exitCode=143 Mar 08 03:10:41.669635 master-0 kubenswrapper[3991]: I0308 03:10:41.669453 3991 generic.go:334] "Generic (PLEG): container finished" podID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerID="2d9e906d444a87e8be6d10da1d15aed8fb665fe3a18c1a9658beaacb2dc08a71" exitCode=143 Mar 08 03:10:41.669635 master-0 kubenswrapper[3991]: I0308 03:10:41.669400 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" event={"ID":"18c148bd-0a23-46f1-b54e-6e8fd18825d5","Type":"ContainerDied","Data":"59842391c2f906e2a1d04139b13a4ad11d03d05812a1e42fe92cdb6ad399f2df"} Mar 08 03:10:41.669635 master-0 kubenswrapper[3991]: I0308 03:10:41.669620 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" event={"ID":"18c148bd-0a23-46f1-b54e-6e8fd18825d5","Type":"ContainerDied","Data":"3a9dc2434f3a5f5442ceae28b6a41707b31b23f92a0be759748599422ca97a2b"} Mar 08 
03:10:41.670294 master-0 kubenswrapper[3991]: I0308 03:10:41.669658 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" event={"ID":"18c148bd-0a23-46f1-b54e-6e8fd18825d5","Type":"ContainerDied","Data":"8b175beb4b4b0f0ca1a091f7935455e85c66628fb2cebb53ac0ceffa81dfe13c"} Mar 08 03:10:41.670294 master-0 kubenswrapper[3991]: I0308 03:10:41.669686 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" event={"ID":"18c148bd-0a23-46f1-b54e-6e8fd18825d5","Type":"ContainerDied","Data":"d287272d23a2bc7ff0f8d11895f5450b4df0a1fcc17b6293207d42ed15b1f661"} Mar 08 03:10:41.670294 master-0 kubenswrapper[3991]: I0308 03:10:41.669711 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" event={"ID":"18c148bd-0a23-46f1-b54e-6e8fd18825d5","Type":"ContainerDied","Data":"2d9e906d444a87e8be6d10da1d15aed8fb665fe3a18c1a9658beaacb2dc08a71"} Mar 08 03:10:41.670294 master-0 kubenswrapper[3991]: I0308 03:10:41.669739 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" event={"ID":"18c148bd-0a23-46f1-b54e-6e8fd18825d5","Type":"ContainerDied","Data":"a4a403ced26061f4a57952fc11b7d80ef9ddbc18727f66e65a74c804b23d6d97"} Mar 08 03:10:41.670294 master-0 kubenswrapper[3991]: I0308 03:10:41.669763 3991 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4a403ced26061f4a57952fc11b7d80ef9ddbc18727f66e65a74c804b23d6d97" Mar 08 03:10:41.676779 master-0 kubenswrapper[3991]: I0308 03:10:41.676730 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6mfs_18c148bd-0a23-46f1-b54e-6e8fd18825d5/ovnkube-controller/0.log" Mar 08 03:10:41.678700 master-0 kubenswrapper[3991]: I0308 03:10:41.678648 3991 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6mfs_18c148bd-0a23-46f1-b54e-6e8fd18825d5/kube-rbac-proxy-ovn-metrics/0.log" Mar 08 03:10:41.679313 master-0 kubenswrapper[3991]: I0308 03:10:41.679267 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6mfs_18c148bd-0a23-46f1-b54e-6e8fd18825d5/kube-rbac-proxy-node/0.log" Mar 08 03:10:41.680044 master-0 kubenswrapper[3991]: I0308 03:10:41.679994 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6mfs_18c148bd-0a23-46f1-b54e-6e8fd18825d5/ovn-acl-logging/0.log" Mar 08 03:10:41.680673 master-0 kubenswrapper[3991]: I0308 03:10:41.680631 3991 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-z6mfs_18c148bd-0a23-46f1-b54e-6e8fd18825d5/ovn-controller/0.log" Mar 08 03:10:41.681343 master-0 kubenswrapper[3991]: I0308 03:10:41.681290 3991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs" Mar 08 03:10:41.700963 master-0 kubenswrapper[3991]: I0308 03:10:41.700858 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=0.700832987 podStartE2EDuration="700.832987ms" podCreationTimestamp="2026-03-08 03:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:10:41.698303761 +0000 UTC m=+103.264241036" watchObservedRunningTime="2026-03-08 03:10:41.700832987 +0000 UTC m=+103.266770252" Mar 08 03:10:41.753942 master-0 kubenswrapper[3991]: I0308 03:10:41.746121 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-run-ovn\") pod \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\" (UID: 
\"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " Mar 08 03:10:41.753942 master-0 kubenswrapper[3991]: I0308 03:10:41.746187 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-node-log\") pod \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " Mar 08 03:10:41.753942 master-0 kubenswrapper[3991]: I0308 03:10:41.746240 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzn8c\" (UniqueName: \"kubernetes.io/projected/18c148bd-0a23-46f1-b54e-6e8fd18825d5-kube-api-access-pzn8c\") pod \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " Mar 08 03:10:41.753942 master-0 kubenswrapper[3991]: I0308 03:10:41.746271 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " Mar 08 03:10:41.753942 master-0 kubenswrapper[3991]: I0308 03:10:41.746309 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/18c148bd-0a23-46f1-b54e-6e8fd18825d5-ovnkube-script-lib\") pod \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " Mar 08 03:10:41.753942 master-0 kubenswrapper[3991]: I0308 03:10:41.746342 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/18c148bd-0a23-46f1-b54e-6e8fd18825d5-env-overrides\") pod \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " Mar 08 03:10:41.753942 master-0 kubenswrapper[3991]: I0308 
03:10:41.746371 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-run-netns\") pod \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " Mar 08 03:10:41.753942 master-0 kubenswrapper[3991]: I0308 03:10:41.746402 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-slash\") pod \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " Mar 08 03:10:41.753942 master-0 kubenswrapper[3991]: I0308 03:10:41.746427 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-systemd-units\") pod \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " Mar 08 03:10:41.753942 master-0 kubenswrapper[3991]: I0308 03:10:41.746459 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-log-socket\") pod \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " Mar 08 03:10:41.753942 master-0 kubenswrapper[3991]: I0308 03:10:41.746487 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-cni-netd\") pod \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " Mar 08 03:10:41.753942 master-0 kubenswrapper[3991]: I0308 03:10:41.746516 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-run-ovn-kubernetes\") pod \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " Mar 08 03:10:41.753942 master-0 kubenswrapper[3991]: I0308 03:10:41.746550 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/18c148bd-0a23-46f1-b54e-6e8fd18825d5-ovn-node-metrics-cert\") pod \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " Mar 08 03:10:41.753942 master-0 kubenswrapper[3991]: I0308 03:10:41.746582 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-etc-openvswitch\") pod \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " Mar 08 03:10:41.753942 master-0 kubenswrapper[3991]: I0308 03:10:41.746610 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-var-lib-openvswitch\") pod \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " Mar 08 03:10:41.753942 master-0 kubenswrapper[3991]: I0308 03:10:41.746638 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-run-openvswitch\") pod \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " Mar 08 03:10:41.753942 master-0 kubenswrapper[3991]: I0308 03:10:41.746664 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-run-systemd\") pod \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\" (UID: 
\"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " Mar 08 03:10:41.753942 master-0 kubenswrapper[3991]: I0308 03:10:41.746692 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/18c148bd-0a23-46f1-b54e-6e8fd18825d5-ovnkube-config\") pod \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " Mar 08 03:10:41.753942 master-0 kubenswrapper[3991]: I0308 03:10:41.746718 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-kubelet\") pod \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " Mar 08 03:10:41.755306 master-0 kubenswrapper[3991]: I0308 03:10:41.746749 3991 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-cni-bin\") pod \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\" (UID: \"18c148bd-0a23-46f1-b54e-6e8fd18825d5\") " Mar 08 03:10:41.755306 master-0 kubenswrapper[3991]: I0308 03:10:41.747028 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "18c148bd-0a23-46f1-b54e-6e8fd18825d5" (UID: "18c148bd-0a23-46f1-b54e-6e8fd18825d5"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:10:41.755306 master-0 kubenswrapper[3991]: I0308 03:10:41.747101 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-node-log" (OuterVolumeSpecName: "node-log") pod "18c148bd-0a23-46f1-b54e-6e8fd18825d5" (UID: "18c148bd-0a23-46f1-b54e-6e8fd18825d5"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:10:41.755306 master-0 kubenswrapper[3991]: I0308 03:10:41.747127 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "18c148bd-0a23-46f1-b54e-6e8fd18825d5" (UID: "18c148bd-0a23-46f1-b54e-6e8fd18825d5"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:10:41.755306 master-0 kubenswrapper[3991]: I0308 03:10:41.747253 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "18c148bd-0a23-46f1-b54e-6e8fd18825d5" (UID: "18c148bd-0a23-46f1-b54e-6e8fd18825d5"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:10:41.755306 master-0 kubenswrapper[3991]: I0308 03:10:41.747308 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "18c148bd-0a23-46f1-b54e-6e8fd18825d5" (UID: "18c148bd-0a23-46f1-b54e-6e8fd18825d5"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:10:41.755306 master-0 kubenswrapper[3991]: I0308 03:10:41.748283 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18c148bd-0a23-46f1-b54e-6e8fd18825d5-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "18c148bd-0a23-46f1-b54e-6e8fd18825d5" (UID: "18c148bd-0a23-46f1-b54e-6e8fd18825d5"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:10:41.755306 master-0 kubenswrapper[3991]: I0308 03:10:41.748965 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-slash" (OuterVolumeSpecName: "host-slash") pod "18c148bd-0a23-46f1-b54e-6e8fd18825d5" (UID: "18c148bd-0a23-46f1-b54e-6e8fd18825d5"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:10:41.755306 master-0 kubenswrapper[3991]: I0308 03:10:41.749006 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "18c148bd-0a23-46f1-b54e-6e8fd18825d5" (UID: "18c148bd-0a23-46f1-b54e-6e8fd18825d5"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:10:41.755306 master-0 kubenswrapper[3991]: I0308 03:10:41.749033 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "18c148bd-0a23-46f1-b54e-6e8fd18825d5" (UID: "18c148bd-0a23-46f1-b54e-6e8fd18825d5"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:10:41.755306 master-0 kubenswrapper[3991]: I0308 03:10:41.749058 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "18c148bd-0a23-46f1-b54e-6e8fd18825d5" (UID: "18c148bd-0a23-46f1-b54e-6e8fd18825d5"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:10:41.755306 master-0 kubenswrapper[3991]: I0308 03:10:41.749064 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "18c148bd-0a23-46f1-b54e-6e8fd18825d5" (UID: "18c148bd-0a23-46f1-b54e-6e8fd18825d5"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:10:41.755306 master-0 kubenswrapper[3991]: I0308 03:10:41.749131 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-log-socket" (OuterVolumeSpecName: "log-socket") pod "18c148bd-0a23-46f1-b54e-6e8fd18825d5" (UID: "18c148bd-0a23-46f1-b54e-6e8fd18825d5"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:10:41.755306 master-0 kubenswrapper[3991]: I0308 03:10:41.749149 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "18c148bd-0a23-46f1-b54e-6e8fd18825d5" (UID: "18c148bd-0a23-46f1-b54e-6e8fd18825d5"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:10:41.755306 master-0 kubenswrapper[3991]: I0308 03:10:41.749214 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "18c148bd-0a23-46f1-b54e-6e8fd18825d5" (UID: "18c148bd-0a23-46f1-b54e-6e8fd18825d5"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:10:41.756085 master-0 kubenswrapper[3991]: I0308 03:10:41.749275 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "18c148bd-0a23-46f1-b54e-6e8fd18825d5" (UID: "18c148bd-0a23-46f1-b54e-6e8fd18825d5"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:10:41.756085 master-0 kubenswrapper[3991]: I0308 03:10:41.749671 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18c148bd-0a23-46f1-b54e-6e8fd18825d5-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "18c148bd-0a23-46f1-b54e-6e8fd18825d5" (UID: "18c148bd-0a23-46f1-b54e-6e8fd18825d5"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:10:41.756085 master-0 kubenswrapper[3991]: I0308 03:10:41.753209 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18c148bd-0a23-46f1-b54e-6e8fd18825d5-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "18c148bd-0a23-46f1-b54e-6e8fd18825d5" (UID: "18c148bd-0a23-46f1-b54e-6e8fd18825d5"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:10:41.756085 master-0 kubenswrapper[3991]: I0308 03:10:41.755043 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "18c148bd-0a23-46f1-b54e-6e8fd18825d5" (UID: "18c148bd-0a23-46f1-b54e-6e8fd18825d5"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:10:41.759285 master-0 kubenswrapper[3991]: I0308 03:10:41.759138 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18c148bd-0a23-46f1-b54e-6e8fd18825d5-kube-api-access-pzn8c" (OuterVolumeSpecName: "kube-api-access-pzn8c") pod "18c148bd-0a23-46f1-b54e-6e8fd18825d5" (UID: "18c148bd-0a23-46f1-b54e-6e8fd18825d5"). InnerVolumeSpecName "kube-api-access-pzn8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:10:41.760032 master-0 kubenswrapper[3991]: I0308 03:10:41.759981 3991 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18c148bd-0a23-46f1-b54e-6e8fd18825d5-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "18c148bd-0a23-46f1-b54e-6e8fd18825d5" (UID: "18c148bd-0a23-46f1-b54e-6e8fd18825d5"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:10:41.780367 master-0 kubenswrapper[3991]: I0308 03:10:41.780316 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jq7bv"] Mar 08 03:10:41.780544 master-0 kubenswrapper[3991]: E0308 03:10:41.780418 3991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="sbdb" Mar 08 03:10:41.780544 master-0 kubenswrapper[3991]: I0308 03:10:41.780434 3991 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="sbdb" Mar 08 03:10:41.780544 master-0 kubenswrapper[3991]: E0308 03:10:41.780444 3991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="kubecfg-setup" Mar 08 03:10:41.780544 master-0 kubenswrapper[3991]: I0308 03:10:41.780452 3991 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="kubecfg-setup" Mar 08 
03:10:41.780544 master-0 kubenswrapper[3991]: E0308 03:10:41.780462 3991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="kube-rbac-proxy-node" Mar 08 03:10:41.780544 master-0 kubenswrapper[3991]: I0308 03:10:41.780469 3991 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="kube-rbac-proxy-node" Mar 08 03:10:41.780544 master-0 kubenswrapper[3991]: E0308 03:10:41.780478 3991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="ovnkube-controller" Mar 08 03:10:41.780544 master-0 kubenswrapper[3991]: I0308 03:10:41.780486 3991 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="ovnkube-controller" Mar 08 03:10:41.780544 master-0 kubenswrapper[3991]: E0308 03:10:41.780495 3991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="ovn-controller" Mar 08 03:10:41.780544 master-0 kubenswrapper[3991]: I0308 03:10:41.780503 3991 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="ovn-controller" Mar 08 03:10:41.780544 master-0 kubenswrapper[3991]: E0308 03:10:41.780511 3991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="ovn-acl-logging" Mar 08 03:10:41.780544 master-0 kubenswrapper[3991]: I0308 03:10:41.780519 3991 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="ovn-acl-logging" Mar 08 03:10:41.780544 master-0 kubenswrapper[3991]: E0308 03:10:41.780527 3991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="northd" Mar 08 03:10:41.780544 master-0 kubenswrapper[3991]: I0308 03:10:41.780535 3991 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="northd"
Mar 08 03:10:41.780544 master-0 kubenswrapper[3991]: E0308 03:10:41.780543 3991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="kube-rbac-proxy-ovn-metrics"
Mar 08 03:10:41.780544 master-0 kubenswrapper[3991]: I0308 03:10:41.780551 3991 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="kube-rbac-proxy-ovn-metrics"
Mar 08 03:10:41.780544 master-0 kubenswrapper[3991]: E0308 03:10:41.780561 3991 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="nbdb"
Mar 08 03:10:41.781127 master-0 kubenswrapper[3991]: I0308 03:10:41.780569 3991 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="nbdb"
Mar 08 03:10:41.781127 master-0 kubenswrapper[3991]: I0308 03:10:41.780609 3991 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="ovn-acl-logging"
Mar 08 03:10:41.781127 master-0 kubenswrapper[3991]: I0308 03:10:41.780618 3991 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="sbdb"
Mar 08 03:10:41.781127 master-0 kubenswrapper[3991]: I0308 03:10:41.780626 3991 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="kube-rbac-proxy-node"
Mar 08 03:10:41.781127 master-0 kubenswrapper[3991]: I0308 03:10:41.780634 3991 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="nbdb"
Mar 08 03:10:41.781127 master-0 kubenswrapper[3991]: I0308 03:10:41.780642 3991 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="ovnkube-controller"
Mar 08 03:10:41.781127 master-0 kubenswrapper[3991]: I0308 03:10:41.780650 3991 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="kube-rbac-proxy-ovn-metrics"
Mar 08 03:10:41.781127 master-0 kubenswrapper[3991]: I0308 03:10:41.780658 3991 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="ovn-controller"
Mar 08 03:10:41.781127 master-0 kubenswrapper[3991]: I0308 03:10:41.780665 3991 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="northd"
Mar 08 03:10:41.781483 master-0 kubenswrapper[3991]: I0308 03:10:41.781431 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.847678 master-0 kubenswrapper[3991]: I0308 03:10:41.847539 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.847678 master-0 kubenswrapper[3991]: I0308 03:10:41.847584 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovnkube-config\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.847678 master-0 kubenswrapper[3991]: I0308 03:10:41.847601 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-cni-netd\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.847678 master-0 kubenswrapper[3991]: I0308 03:10:41.847623 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.847678 master-0 kubenswrapper[3991]: I0308 03:10:41.847640 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-systemd-units\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.847678 master-0 kubenswrapper[3991]: I0308 03:10:41.847656 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-run-ovn-kubernetes\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.847678 master-0 kubenswrapper[3991]: I0308 03:10:41.847688 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-kubelet\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.848066 master-0 kubenswrapper[3991]: I0308 03:10:41.847703 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-log-socket\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.848066 master-0 kubenswrapper[3991]: I0308 03:10:41.847719 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-etc-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.848066 master-0 kubenswrapper[3991]: I0308 03:10:41.847733 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-run-netns\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.848066 master-0 kubenswrapper[3991]: I0308 03:10:41.847746 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-var-lib-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.848066 master-0 kubenswrapper[3991]: I0308 03:10:41.847762 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-node-log\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.848066 master-0 kubenswrapper[3991]: I0308 03:10:41.847775 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-ovn\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.848066 master-0 kubenswrapper[3991]: I0308 03:10:41.847803 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovn-node-metrics-cert\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.848066 master-0 kubenswrapper[3991]: I0308 03:10:41.847818 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovnkube-script-lib\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.848066 master-0 kubenswrapper[3991]: I0308 03:10:41.847833 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl7m5\" (UniqueName: \"kubernetes.io/projected/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-kube-api-access-hl7m5\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.848066 master-0 kubenswrapper[3991]: I0308 03:10:41.847848 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-systemd\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.848066 master-0 kubenswrapper[3991]: I0308 03:10:41.847863 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-cni-bin\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.848066 master-0 kubenswrapper[3991]: I0308 03:10:41.847881 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-env-overrides\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.848066 master-0 kubenswrapper[3991]: I0308 03:10:41.847896 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-slash\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.848066 master-0 kubenswrapper[3991]: I0308 03:10:41.847942 3991 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-etc-openvswitch\") on node \"master-0\" DevicePath \"\""
Mar 08 03:10:41.848066 master-0 kubenswrapper[3991]: I0308 03:10:41.847956 3991 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\""
Mar 08 03:10:41.848066 master-0 kubenswrapper[3991]: I0308 03:10:41.847968 3991 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-run-openvswitch\") on node \"master-0\" DevicePath \"\""
Mar 08 03:10:41.848066 master-0 kubenswrapper[3991]: I0308 03:10:41.847979 3991 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-run-systemd\") on node \"master-0\" DevicePath \"\""
Mar 08 03:10:41.848066 master-0 kubenswrapper[3991]: I0308 03:10:41.847990 3991 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/18c148bd-0a23-46f1-b54e-6e8fd18825d5-ovnkube-config\") on node \"master-0\" DevicePath \"\""
Mar 08 03:10:41.848066 master-0 kubenswrapper[3991]: I0308 03:10:41.847999 3991 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-kubelet\") on node \"master-0\" DevicePath \"\""
Mar 08 03:10:41.848660 master-0 kubenswrapper[3991]: I0308 03:10:41.848007 3991 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-cni-bin\") on node \"master-0\" DevicePath \"\""
Mar 08 03:10:41.848660 master-0 kubenswrapper[3991]: I0308 03:10:41.848016 3991 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-run-ovn\") on node \"master-0\" DevicePath \"\""
Mar 08 03:10:41.848660 master-0 kubenswrapper[3991]: I0308 03:10:41.848024 3991 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-node-log\") on node \"master-0\" DevicePath \"\""
Mar 08 03:10:41.848660 master-0 kubenswrapper[3991]: I0308 03:10:41.848033 3991 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzn8c\" (UniqueName: \"kubernetes.io/projected/18c148bd-0a23-46f1-b54e-6e8fd18825d5-kube-api-access-pzn8c\") on node \"master-0\" DevicePath \"\""
Mar 08 03:10:41.848660 master-0 kubenswrapper[3991]: I0308 03:10:41.848042 3991 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\""
Mar 08 03:10:41.848660 master-0 kubenswrapper[3991]: I0308 03:10:41.848050 3991 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/18c148bd-0a23-46f1-b54e-6e8fd18825d5-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\""
Mar 08 03:10:41.848660 master-0 kubenswrapper[3991]: I0308 03:10:41.848059 3991 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/18c148bd-0a23-46f1-b54e-6e8fd18825d5-env-overrides\") on node \"master-0\" DevicePath \"\""
Mar 08 03:10:41.848660 master-0 kubenswrapper[3991]: I0308 03:10:41.848069 3991 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-run-netns\") on node \"master-0\" DevicePath \"\""
Mar 08 03:10:41.848660 master-0 kubenswrapper[3991]: I0308 03:10:41.848078 3991 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-slash\") on node \"master-0\" DevicePath \"\""
Mar 08 03:10:41.848660 master-0 kubenswrapper[3991]: I0308 03:10:41.848088 3991 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-systemd-units\") on node \"master-0\" DevicePath \"\""
Mar 08 03:10:41.848660 master-0 kubenswrapper[3991]: I0308 03:10:41.848096 3991 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-log-socket\") on node \"master-0\" DevicePath \"\""
Mar 08 03:10:41.848660 master-0 kubenswrapper[3991]: I0308 03:10:41.848104 3991 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-cni-netd\") on node \"master-0\" DevicePath \"\""
Mar 08 03:10:41.848660 master-0 kubenswrapper[3991]: I0308 03:10:41.848112 3991 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/18c148bd-0a23-46f1-b54e-6e8fd18825d5-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\""
Mar 08 03:10:41.848660 master-0 kubenswrapper[3991]: I0308 03:10:41.848121 3991 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/18c148bd-0a23-46f1-b54e-6e8fd18825d5-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\""
Mar 08 03:10:41.949465 master-0 kubenswrapper[3991]: I0308 03:10:41.949385 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovn-node-metrics-cert\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.949465 master-0 kubenswrapper[3991]: I0308 03:10:41.949438 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-systemd\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.949765 master-0 kubenswrapper[3991]: I0308 03:10:41.949655 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovnkube-script-lib\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.949765 master-0 kubenswrapper[3991]: I0308 03:10:41.949718 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-systemd\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.949765 master-0 kubenswrapper[3991]: I0308 03:10:41.949727 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl7m5\" (UniqueName: \"kubernetes.io/projected/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-kube-api-access-hl7m5\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.949982 master-0 kubenswrapper[3991]: I0308 03:10:41.949771 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-slash\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.949982 master-0 kubenswrapper[3991]: I0308 03:10:41.949856 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-slash\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.950131 master-0 kubenswrapper[3991]: I0308 03:10:41.949998 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-cni-bin\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.950131 master-0 kubenswrapper[3991]: I0308 03:10:41.950021 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-env-overrides\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.950243 master-0 kubenswrapper[3991]: I0308 03:10:41.950126 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-cni-bin\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.950243 master-0 kubenswrapper[3991]: I0308 03:10:41.950178 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.950243 master-0 kubenswrapper[3991]: I0308 03:10:41.950205 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovnkube-config\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.950243 master-0 kubenswrapper[3991]: I0308 03:10:41.950221 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-cni-netd\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.950243 master-0 kubenswrapper[3991]: I0308 03:10:41.950241 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.950489 master-0 kubenswrapper[3991]: I0308 03:10:41.950259 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-systemd-units\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.950489 master-0 kubenswrapper[3991]: I0308 03:10:41.950293 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.950489 master-0 kubenswrapper[3991]: I0308 03:10:41.950317 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-log-socket\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.950489 master-0 kubenswrapper[3991]: I0308 03:10:41.950335 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-run-ovn-kubernetes\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.950489 master-0 kubenswrapper[3991]: I0308 03:10:41.950357 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-kubelet\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.950489 master-0 kubenswrapper[3991]: I0308 03:10:41.950373 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-etc-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.950489 master-0 kubenswrapper[3991]: I0308 03:10:41.950388 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-run-netns\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.950489 master-0 kubenswrapper[3991]: I0308 03:10:41.950407 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-var-lib-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.950489 master-0 kubenswrapper[3991]: I0308 03:10:41.950422 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-node-log\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.950489 master-0 kubenswrapper[3991]: I0308 03:10:41.950438 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-ovn\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.950489 master-0 kubenswrapper[3991]: I0308 03:10:41.950477 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-ovn\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.950489 master-0 kubenswrapper[3991]: I0308 03:10:41.950499 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-systemd-units\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.951143 master-0 kubenswrapper[3991]: I0308 03:10:41.950523 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-cni-netd\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.951143 master-0 kubenswrapper[3991]: I0308 03:10:41.950543 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-log-socket\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.951143 master-0 kubenswrapper[3991]: I0308 03:10:41.950564 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-run-ovn-kubernetes\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.951143 master-0 kubenswrapper[3991]: I0308 03:10:41.950583 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-kubelet\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.951143 master-0 kubenswrapper[3991]: I0308 03:10:41.950625 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-node-log\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.951143 master-0 kubenswrapper[3991]: I0308 03:10:41.950646 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-run-netns\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.951143 master-0 kubenswrapper[3991]: I0308 03:10:41.950641 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.951143 master-0 kubenswrapper[3991]: I0308 03:10:41.950667 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-var-lib-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.951143 master-0 kubenswrapper[3991]: I0308 03:10:41.950710 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-etc-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.951143 master-0 kubenswrapper[3991]: I0308 03:10:41.951071 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-env-overrides\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.951143 master-0 kubenswrapper[3991]: I0308 03:10:41.951096 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovnkube-script-lib\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.952022 master-0 kubenswrapper[3991]: I0308 03:10:41.951659 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovnkube-config\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.954015 master-0 kubenswrapper[3991]: I0308 03:10:41.953963 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovn-node-metrics-cert\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:41.968982 master-0 kubenswrapper[3991]: I0308 03:10:41.968865 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl7m5\" (UniqueName: \"kubernetes.io/projected/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-kube-api-access-hl7m5\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:42.100014 master-0 kubenswrapper[3991]: I0308 03:10:42.099813 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:10:42.114745 master-0 kubenswrapper[3991]: W0308 03:10:42.114020 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d40fba7_84f0_46d7_9b49_dbba7aab20c5.slice/crio-6c2ad8212c197eee7b469f1de5efa66984b471df3e1f03d54b6b5ff8745f2152 WatchSource:0}: Error finding container 6c2ad8212c197eee7b469f1de5efa66984b471df3e1f03d54b6b5ff8745f2152: Status 404 returned error can't find the container with id 6c2ad8212c197eee7b469f1de5efa66984b471df3e1f03d54b6b5ff8745f2152
Mar 08 03:10:42.217102 master-0 kubenswrapper[3991]: I0308 03:10:42.217046 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:10:42.217346 master-0 kubenswrapper[3991]: E0308 03:10:42.217292 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a"
Mar 08 03:10:42.676083 master-0 kubenswrapper[3991]: I0308 03:10:42.675991 3991 generic.go:334] "Generic (PLEG): container finished" podID="9d40fba7-84f0-46d7-9b49-dbba7aab20c5" containerID="3c3d9e33877d35a402198be63a50621dbf8be27a97d9c8596143b4df8d2863cd" exitCode=0
Mar 08 03:10:42.676309 master-0 kubenswrapper[3991]: I0308 03:10:42.676107 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" event={"ID":"9d40fba7-84f0-46d7-9b49-dbba7aab20c5","Type":"ContainerDied","Data":"3c3d9e33877d35a402198be63a50621dbf8be27a97d9c8596143b4df8d2863cd"}
Mar 08 03:10:42.676309 master-0 kubenswrapper[3991]: I0308 03:10:42.676170 3991 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z6mfs"
Mar 08 03:10:42.676419 master-0 kubenswrapper[3991]: I0308 03:10:42.676170 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" event={"ID":"9d40fba7-84f0-46d7-9b49-dbba7aab20c5","Type":"ContainerStarted","Data":"6c2ad8212c197eee7b469f1de5efa66984b471df3e1f03d54b6b5ff8745f2152"}
Mar 08 03:10:42.751998 master-0 kubenswrapper[3991]: I0308 03:10:42.751558 3991 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z6mfs"]
Mar 08 03:10:42.758258 master-0 kubenswrapper[3991]: I0308 03:10:42.758192 3991 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z6mfs"]
Mar 08 03:10:43.216822 master-0 kubenswrapper[3991]: I0308 03:10:43.216736 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4lx8s"
Mar 08 03:10:43.217115 master-0 kubenswrapper[3991]: E0308 03:10:43.216890 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4lx8s" podUID="0e59f2e1-7fbc-43b1-bc81-7ca5f058d774"
Mar 08 03:10:43.225363 master-0 kubenswrapper[3991]: I0308 03:10:43.225302 3991 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" path="/var/lib/kubelet/pods/18c148bd-0a23-46f1-b54e-6e8fd18825d5/volumes"
Mar 08 03:10:43.685988 master-0 kubenswrapper[3991]: I0308 03:10:43.685858 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" event={"ID":"9d40fba7-84f0-46d7-9b49-dbba7aab20c5","Type":"ContainerStarted","Data":"57fbdce89ecca57ac22cc6a5e44417d231a239c4076b57580be622ed9408830b"}
Mar 08 03:10:43.685988 master-0 kubenswrapper[3991]: I0308 03:10:43.685944 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" event={"ID":"9d40fba7-84f0-46d7-9b49-dbba7aab20c5","Type":"ContainerStarted","Data":"fdf5a8f06244fe6976f653f98c2fc74f4772c21f085dfca9715a428735c24884"}
Mar 08 03:10:43.685988 master-0 kubenswrapper[3991]: I0308 03:10:43.685964 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" event={"ID":"9d40fba7-84f0-46d7-9b49-dbba7aab20c5","Type":"ContainerStarted","Data":"65d2048f6c80d069e3a4f165c1898b3239607437f63fc40f1b1ad7be01138516"}
Mar 08 03:10:43.685988 master-0 kubenswrapper[3991]: I0308 03:10:43.685984 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" event={"ID":"9d40fba7-84f0-46d7-9b49-dbba7aab20c5","Type":"ContainerStarted","Data":"f4189587a472d2b938fa67f220a2d80874c9b2641fbc0f51f8866bbddd5b276c"}
Mar 08 03:10:43.685988 master-0 kubenswrapper[3991]: I0308 03:10:43.686000 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" event={"ID":"9d40fba7-84f0-46d7-9b49-dbba7aab20c5","Type":"ContainerStarted","Data":"3de1c3b9da12818a1ed64df18f665f27cdcd2be056183baffeb0f6ced0f254ad"}
Mar 08 03:10:43.685988 master-0 kubenswrapper[3991]: I0308 03:10:43.686014 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" event={"ID":"9d40fba7-84f0-46d7-9b49-dbba7aab20c5","Type":"ContainerStarted","Data":"480a92a0bc22eff45c66b422dbeeccf31df079ddbfe2da8345516fd8b3e6d58c"}
Mar 08 03:10:44.217205 master-0 kubenswrapper[3991]: I0308 03:10:44.217091 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:10:44.217434 master-0 kubenswrapper[3991]: E0308 03:10:44.217270 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a"
Mar 08 03:10:45.217408 master-0 kubenswrapper[3991]: I0308 03:10:45.217301 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4lx8s"
Mar 08 03:10:45.218687 master-0 kubenswrapper[3991]: E0308 03:10:45.217510 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4lx8s" podUID="0e59f2e1-7fbc-43b1-bc81-7ca5f058d774"
Mar 08 03:10:45.990146 master-0 kubenswrapper[3991]: I0308 03:10:45.990006 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld"
Mar 08 03:10:45.990362 master-0 kubenswrapper[3991]: E0308 03:10:45.990303 3991 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 08 03:10:45.990444 master-0 kubenswrapper[3991]: E0308 03:10:45.990423 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert podName:1f7c9726-057b-4c5c-8a03-9bc407dedb9b nodeName:}" failed. No retries permitted until 2026-03-08 03:11:49.990392905 +0000 UTC m=+171.556330170 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert") pod "cluster-version-operator-745944c6b7-rs4ld" (UID: "1f7c9726-057b-4c5c-8a03-9bc407dedb9b") : secret "cluster-version-operator-serving-cert" not found Mar 08 03:10:46.217017 master-0 kubenswrapper[3991]: I0308 03:10:46.216889 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:10:46.217268 master-0 kubenswrapper[3991]: E0308 03:10:46.217073 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a" Mar 08 03:10:46.705454 master-0 kubenswrapper[3991]: I0308 03:10:46.705357 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" event={"ID":"9d40fba7-84f0-46d7-9b49-dbba7aab20c5","Type":"ContainerStarted","Data":"0bbfffcf0ee60f8b368e84390bbdd452ccb5e9f3702eb9d2bcd07b5f52e39d5c"} Mar 08 03:10:47.102449 master-0 kubenswrapper[3991]: I0308 03:10:47.102362 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2ng6\" (UniqueName: \"kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6\") pod \"network-check-target-4lx8s\" (UID: \"0e59f2e1-7fbc-43b1-bc81-7ca5f058d774\") " pod="openshift-network-diagnostics/network-check-target-4lx8s" Mar 08 03:10:47.102799 master-0 kubenswrapper[3991]: E0308 03:10:47.102643 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 08 
03:10:47.102799 master-0 kubenswrapper[3991]: E0308 03:10:47.102684 3991 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 08 03:10:47.102799 master-0 kubenswrapper[3991]: E0308 03:10:47.102704 3991 projected.go:194] Error preparing data for projected volume kube-api-access-w2ng6 for pod openshift-network-diagnostics/network-check-target-4lx8s: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 03:10:47.103116 master-0 kubenswrapper[3991]: E0308 03:10:47.102805 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6 podName:0e59f2e1-7fbc-43b1-bc81-7ca5f058d774 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:19.102769398 +0000 UTC m=+140.668706693 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-w2ng6" (UniqueName: "kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6") pod "network-check-target-4lx8s" (UID: "0e59f2e1-7fbc-43b1-bc81-7ca5f058d774") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 03:10:47.217247 master-0 kubenswrapper[3991]: I0308 03:10:47.217153 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4lx8s" Mar 08 03:10:47.217524 master-0 kubenswrapper[3991]: E0308 03:10:47.217338 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4lx8s" podUID="0e59f2e1-7fbc-43b1-bc81-7ca5f058d774" Mar 08 03:10:48.216839 master-0 kubenswrapper[3991]: I0308 03:10:48.216781 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:10:48.217820 master-0 kubenswrapper[3991]: E0308 03:10:48.216953 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a" Mar 08 03:10:48.723796 master-0 kubenswrapper[3991]: I0308 03:10:48.723673 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" event={"ID":"9d40fba7-84f0-46d7-9b49-dbba7aab20c5","Type":"ContainerStarted","Data":"104cf77e10a50fd9d6b8bf522538cafc0ff38230bfb9912fb4ebbd8c68eba396"} Mar 08 03:10:48.724782 master-0 kubenswrapper[3991]: I0308 03:10:48.724091 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:10:48.724782 master-0 kubenswrapper[3991]: I0308 03:10:48.724301 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:10:48.724782 master-0 kubenswrapper[3991]: I0308 03:10:48.724358 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:10:48.758749 master-0 kubenswrapper[3991]: I0308 03:10:48.758220 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:10:48.765407 master-0 kubenswrapper[3991]: I0308 
03:10:48.765342 3991 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:10:48.770674 master-0 kubenswrapper[3991]: I0308 03:10:48.770574 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" podStartSLOduration=7.770546243 podStartE2EDuration="7.770546243s" podCreationTimestamp="2026-03-08 03:10:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:10:48.769168727 +0000 UTC m=+110.335106082" watchObservedRunningTime="2026-03-08 03:10:48.770546243 +0000 UTC m=+110.336483498" Mar 08 03:10:49.217224 master-0 kubenswrapper[3991]: I0308 03:10:49.217150 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4lx8s" Mar 08 03:10:49.218441 master-0 kubenswrapper[3991]: E0308 03:10:49.217734 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4lx8s" podUID="0e59f2e1-7fbc-43b1-bc81-7ca5f058d774" Mar 08 03:10:49.469175 master-0 kubenswrapper[3991]: I0308 03:10:49.462458 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-4lx8s"] Mar 08 03:10:49.469175 master-0 kubenswrapper[3991]: I0308 03:10:49.463565 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2l64n"] Mar 08 03:10:49.469175 master-0 kubenswrapper[3991]: I0308 03:10:49.464677 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:10:49.469175 master-0 kubenswrapper[3991]: E0308 03:10:49.464840 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a" Mar 08 03:10:49.728301 master-0 kubenswrapper[3991]: I0308 03:10:49.728134 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4lx8s" Mar 08 03:10:49.728528 master-0 kubenswrapper[3991]: E0308 03:10:49.728339 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4lx8s" podUID="0e59f2e1-7fbc-43b1-bc81-7ca5f058d774" Mar 08 03:10:51.217178 master-0 kubenswrapper[3991]: I0308 03:10:51.217076 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4lx8s" Mar 08 03:10:51.218181 master-0 kubenswrapper[3991]: I0308 03:10:51.217236 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:10:51.218181 master-0 kubenswrapper[3991]: E0308 03:10:51.217320 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-4lx8s" podUID="0e59f2e1-7fbc-43b1-bc81-7ca5f058d774" Mar 08 03:10:51.218181 master-0 kubenswrapper[3991]: E0308 03:10:51.217456 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a" Mar 08 03:10:53.216837 master-0 kubenswrapper[3991]: I0308 03:10:53.216740 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4lx8s" Mar 08 03:10:53.217843 master-0 kubenswrapper[3991]: I0308 03:10:53.216769 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:10:53.217843 master-0 kubenswrapper[3991]: E0308 03:10:53.216950 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-4lx8s" podUID="0e59f2e1-7fbc-43b1-bc81-7ca5f058d774" Mar 08 03:10:53.217843 master-0 kubenswrapper[3991]: E0308 03:10:53.217071 3991 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2l64n" podUID="f6ee6202-11e5-4586-ae46-075da1ad7f1a" Mar 08 03:10:53.798660 master-0 kubenswrapper[3991]: I0308 03:10:53.798616 3991 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady" Mar 08 03:10:53.799125 master-0 kubenswrapper[3991]: I0308 03:10:53.799100 3991 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Mar 08 03:10:53.844499 master-0 kubenswrapper[3991]: I0308 03:10:53.844380 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt"] Mar 08 03:10:53.845460 master-0 kubenswrapper[3991]: I0308 03:10:53.845391 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt" Mar 08 03:10:53.846045 master-0 kubenswrapper[3991]: I0308 03:10:53.845985 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-xbrdp"] Mar 08 03:10:53.846511 master-0 kubenswrapper[3991]: I0308 03:10:53.846460 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-xbrdp" Mar 08 03:10:53.847314 master-0 kubenswrapper[3991]: I0308 03:10:53.847215 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg"] Mar 08 03:10:53.848658 master-0 kubenswrapper[3991]: I0308 03:10:53.847779 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" Mar 08 03:10:53.851682 master-0 kubenswrapper[3991]: I0308 03:10:53.851604 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"] Mar 08 03:10:53.861562 master-0 kubenswrapper[3991]: I0308 03:10:53.861416 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" Mar 08 03:10:53.862088 master-0 kubenswrapper[3991]: I0308 03:10:53.862030 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 08 03:10:53.862383 master-0 kubenswrapper[3991]: I0308 03:10:53.862348 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 08 03:10:53.862555 master-0 kubenswrapper[3991]: I0308 03:10:53.862490 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 08 03:10:53.862555 master-0 kubenswrapper[3991]: I0308 03:10:53.862551 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 08 03:10:53.862741 master-0 kubenswrapper[3991]: I0308 03:10:53.862706 3991 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 08 03:10:53.862812 master-0 kubenswrapper[3991]: I0308 03:10:53.862717 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 08 03:10:53.864525 master-0 kubenswrapper[3991]: I0308 03:10:53.864463 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"] Mar 08 03:10:53.868102 master-0 kubenswrapper[3991]: I0308 03:10:53.866136 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" Mar 08 03:10:53.868102 master-0 kubenswrapper[3991]: I0308 03:10:53.867390 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 08 03:10:53.868102 master-0 kubenswrapper[3991]: I0308 03:10:53.867680 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 08 03:10:53.868102 master-0 kubenswrapper[3991]: I0308 03:10:53.867416 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 08 03:10:53.871943 master-0 kubenswrapper[3991]: I0308 03:10:53.868817 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 08 03:10:53.878173 master-0 kubenswrapper[3991]: I0308 03:10:53.878120 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-9mhwc"] Mar 08 03:10:53.898052 master-0 kubenswrapper[3991]: I0308 03:10:53.897457 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 08 03:10:53.898052 master-0 kubenswrapper[3991]: I0308 03:10:53.898042 3991 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"kube-root-ca.crt" Mar 08 03:10:53.900312 master-0 kubenswrapper[3991]: I0308 03:10:53.898437 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 08 03:10:53.900312 master-0 kubenswrapper[3991]: I0308 03:10:53.899715 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4"] Mar 08 03:10:53.900312 master-0 kubenswrapper[3991]: I0308 03:10:53.899935 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-xhkzl"] Mar 08 03:10:53.900312 master-0 kubenswrapper[3991]: I0308 03:10:53.900126 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 08 03:10:53.900312 master-0 kubenswrapper[3991]: I0308 03:10:53.900266 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl" Mar 08 03:10:53.900577 master-0 kubenswrapper[3991]: I0308 03:10:53.900385 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc" Mar 08 03:10:53.900647 master-0 kubenswrapper[3991]: I0308 03:10:53.900631 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:10:53.902570 master-0 kubenswrapper[3991]: I0308 03:10:53.902181 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 08 03:10:53.902570 master-0 kubenswrapper[3991]: I0308 03:10:53.902409 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 08 03:10:53.907223 master-0 kubenswrapper[3991]: I0308 03:10:53.907135 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 08 03:10:53.907400 master-0 kubenswrapper[3991]: I0308 03:10:53.907362 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 08 03:10:53.907959 master-0 kubenswrapper[3991]: I0308 03:10:53.907566 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 08 03:10:53.907959 master-0 kubenswrapper[3991]: I0308 03:10:53.907735 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 08 03:10:53.907959 master-0 kubenswrapper[3991]: I0308 03:10:53.907872 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 08 03:10:53.908338 master-0 kubenswrapper[3991]: I0308 03:10:53.908299 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 08 03:10:53.908622 master-0 kubenswrapper[3991]: I0308 03:10:53.908580 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 08 03:10:53.924089 master-0 kubenswrapper[3991]: I0308 03:10:53.921387 3991 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr"] Mar 08 03:10:53.924089 master-0 kubenswrapper[3991]: I0308 03:10:53.921752 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7"] Mar 08 03:10:53.924089 master-0 kubenswrapper[3991]: I0308 03:10:53.921770 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr" Mar 08 03:10:53.924089 master-0 kubenswrapper[3991]: I0308 03:10:53.922219 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7" Mar 08 03:10:53.924089 master-0 kubenswrapper[3991]: I0308 03:10:53.922662 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 08 03:10:53.924089 master-0 kubenswrapper[3991]: I0308 03:10:53.922726 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 08 03:10:53.924089 master-0 kubenswrapper[3991]: I0308 03:10:53.922899 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 08 03:10:53.925606 master-0 kubenswrapper[3991]: I0308 03:10:53.925550 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 08 03:10:53.929941 master-0 kubenswrapper[3991]: I0308 03:10:53.928221 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq"] Mar 08 03:10:53.929941 master-0 kubenswrapper[3991]: I0308 03:10:53.928730 3991 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr"] Mar 08 03:10:53.929941 master-0 kubenswrapper[3991]: I0308 03:10:53.929155 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" Mar 08 03:10:53.929941 master-0 kubenswrapper[3991]: I0308 03:10:53.929391 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w"] Mar 08 03:10:53.929941 master-0 kubenswrapper[3991]: I0308 03:10:53.929715 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.940140 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.940363 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.940377 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8"] Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.940697 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.941069 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.941531 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.941543 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w" Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.941686 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.941928 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.942091 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.942119 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.942315 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf"] Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.943204 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.944418 3991 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.944518 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.944696 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.944844 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.946350 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.946386 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.946815 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.946880 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.947397 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.947498 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.947628 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 08 03:10:53.951487 master-0 kubenswrapper[3991]: I0308 03:10:53.951384 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf"]
Mar 08 03:10:53.953187 master-0 kubenswrapper[3991]: I0308 03:10:53.953145 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf"
Mar 08 03:10:53.953270 master-0 kubenswrapper[3991]: I0308 03:10:53.953206 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 08 03:10:53.955741 master-0 kubenswrapper[3991]: I0308 03:10:53.955698 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll"]
Mar 08 03:10:53.955953 master-0 kubenswrapper[3991]: I0308 03:10:53.955938 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx"]
Mar 08 03:10:53.956140 master-0 kubenswrapper[3991]: I0308 03:10:53.956125 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n"]
Mar 08 03:10:53.956348 master-0 kubenswrapper[3991]: I0308 03:10:53.956294 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf"
Mar 08 03:10:53.956348 master-0 kubenswrapper[3991]: I0308 03:10:53.956332 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6"]
Mar 08 03:10:53.956567 master-0 kubenswrapper[3991]: I0308 03:10:53.956551 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx"]
Mar 08 03:10:53.956831 master-0 kubenswrapper[3991]: I0308 03:10:53.956789 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx"
Mar 08 03:10:53.956990 master-0 kubenswrapper[3991]: I0308 03:10:53.956967 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n"
Mar 08 03:10:53.957166 master-0 kubenswrapper[3991]: I0308 03:10:53.957151 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6"
Mar 08 03:10:53.957251 master-0 kubenswrapper[3991]: I0308 03:10:53.957188 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll"
Mar 08 03:10:53.958321 master-0 kubenswrapper[3991]: I0308 03:10:53.957445 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx"
Mar 08 03:10:53.958321 master-0 kubenswrapper[3991]: I0308 03:10:53.958012 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw"]
Mar 08 03:10:53.958946 master-0 kubenswrapper[3991]: I0308 03:10:53.958713 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw"
Mar 08 03:10:53.959505 master-0 kubenswrapper[3991]: I0308 03:10:53.959367 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 08 03:10:53.959505 master-0 kubenswrapper[3991]: I0308 03:10:53.959471 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 08 03:10:53.960514 master-0 kubenswrapper[3991]: I0308 03:10:53.960226 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 08 03:10:53.960514 master-0 kubenswrapper[3991]: I0308 03:10:53.960349 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 08 03:10:53.961663 master-0 kubenswrapper[3991]: I0308 03:10:53.961623 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 08 03:10:53.962726 master-0 kubenswrapper[3991]: I0308 03:10:53.962147 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 08 03:10:53.962726 master-0 kubenswrapper[3991]: I0308 03:10:53.962328 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 08 03:10:53.962726 master-0 kubenswrapper[3991]: I0308 03:10:53.962349 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 08 03:10:53.962726 master-0 kubenswrapper[3991]: I0308 03:10:53.962455 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 08 03:10:53.962726 master-0 kubenswrapper[3991]: I0308 03:10:53.962534 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 08 03:10:53.962726 master-0 kubenswrapper[3991]: I0308 03:10:53.962543 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 08 03:10:53.962726 master-0 kubenswrapper[3991]: I0308 03:10:53.962643 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 08 03:10:53.963046 master-0 kubenswrapper[3991]: I0308 03:10:53.962819 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 08 03:10:53.963200 master-0 kubenswrapper[3991]: I0308 03:10:53.963134 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 08 03:10:53.963200 master-0 kubenswrapper[3991]: I0308 03:10:53.963195 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 08 03:10:53.963285 master-0 kubenswrapper[3991]: I0308 03:10:53.963263 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 08 03:10:53.964428 master-0 kubenswrapper[3991]: I0308 03:10:53.963315 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 08 03:10:53.964428 master-0 kubenswrapper[3991]: I0308 03:10:53.963363 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 08 03:10:53.964428 master-0 kubenswrapper[3991]: I0308 03:10:53.964063 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 08 03:10:53.964428 master-0 kubenswrapper[3991]: I0308 03:10:53.964282 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 08 03:10:53.964428 master-0 kubenswrapper[3991]: I0308 03:10:53.964357 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 08 03:10:53.964646 master-0 kubenswrapper[3991]: I0308 03:10:53.964569 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 08 03:10:53.964646 master-0 kubenswrapper[3991]: I0308 03:10:53.964604 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 08 03:10:53.964757 master-0 kubenswrapper[3991]: I0308 03:10:53.964735 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 08 03:10:53.964945 master-0 kubenswrapper[3991]: I0308 03:10:53.964895 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 08 03:10:53.965168 master-0 kubenswrapper[3991]: I0308 03:10:53.965044 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 08 03:10:53.965168 master-0 kubenswrapper[3991]: I0308 03:10:53.965040 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt"]
Mar 08 03:10:53.965168 master-0 kubenswrapper[3991]: I0308 03:10:53.964921 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 08 03:10:53.966026 master-0 kubenswrapper[3991]: I0308 03:10:53.965995 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-xbrdp"]
Mar 08 03:10:53.966614 master-0 kubenswrapper[3991]: I0308 03:10:53.966589 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg"]
Mar 08 03:10:53.969194 master-0 kubenswrapper[3991]: I0308 03:10:53.969119 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 08 03:10:53.970342 master-0 kubenswrapper[3991]: I0308 03:10:53.970305 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-9mhwc"]
Mar 08 03:10:53.970583 master-0 kubenswrapper[3991]: I0308 03:10:53.970555 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"]
Mar 08 03:10:53.971709 master-0 kubenswrapper[3991]: I0308 03:10:53.971674 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4"]
Mar 08 03:10:53.974288 master-0 kubenswrapper[3991]: I0308 03:10:53.974257 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"]
Mar 08 03:10:53.976508 master-0 kubenswrapper[3991]: I0308 03:10:53.976478 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr"]
Mar 08 03:10:53.977431 master-0 kubenswrapper[3991]: I0308 03:10:53.977389 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8"]
Mar 08 03:10:53.978267 master-0 kubenswrapper[3991]: I0308 03:10:53.978244 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7"]
Mar 08 03:10:53.980084 master-0 kubenswrapper[3991]: I0308 03:10:53.980028 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf"]
Mar 08 03:10:53.981006 master-0 kubenswrapper[3991]: I0308 03:10:53.980974 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq"]
Mar 08 03:10:53.982000 master-0 kubenswrapper[3991]: I0308 03:10:53.981968 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll"]
Mar 08 03:10:53.985832 master-0 kubenswrapper[3991]: I0308 03:10:53.985804 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6"]
Mar 08 03:10:53.991489 master-0 kubenswrapper[3991]: I0308 03:10:53.990967 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-xhkzl"]
Mar 08 03:10:53.991851 master-0 kubenswrapper[3991]: I0308 03:10:53.991818 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr"]
Mar 08 03:10:53.992484 master-0 kubenswrapper[3991]: I0308 03:10:53.992461 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w"]
Mar 08 03:10:53.993436 master-0 kubenswrapper[3991]: I0308 03:10:53.993184 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx"]
Mar 08 03:10:53.997327 master-0 kubenswrapper[3991]: I0308 03:10:53.997302 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n"]
Mar 08 03:10:53.999967 master-0 kubenswrapper[3991]: I0308 03:10:53.999936 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx"]
Mar 08 03:10:54.000550 master-0 kubenswrapper[3991]: I0308 03:10:54.000502 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf"]
Mar 08 03:10:54.000723 master-0 kubenswrapper[3991]: I0308 03:10:54.000698 3991 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-fpxrc"]
Mar 08 03:10:54.001328 master-0 kubenswrapper[3991]: I0308 03:10:54.001300 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-fpxrc"
Mar 08 03:10:54.003154 master-0 kubenswrapper[3991]: I0308 03:10:54.003073 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 08 03:10:54.004235 master-0 kubenswrapper[3991]: I0308 03:10:54.004210 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gnng\" (UniqueName: \"kubernetes.io/projected/3d69f101-60a8-41fd-bcda-4eb654c626a2-kube-api-access-8gnng\") pod \"csi-snapshot-controller-operator-5685fbc7d-xbrdp\" (UID: \"3d69f101-60a8-41fd-bcda-4eb654c626a2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-xbrdp"
Mar 08 03:10:54.004364 master-0 kubenswrapper[3991]: I0308 03:10:54.004335 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/197afe92-5912-4e90-a477-e3abe001bbc7-bound-sa-token\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"
Mar 08 03:10:54.004486 master-0 kubenswrapper[3991]: I0308 03:10:54.004470 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms6s7\" (UniqueName: \"kubernetes.io/projected/4711e21f-da6d-47ee-8722-64663e05de10-kube-api-access-ms6s7\") pod \"cluster-olm-operator-77899cf6d-7vlmt\" (UID: \"4711e21f-da6d-47ee-8722-64663e05de10\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt"
Mar 08 03:10:54.004580 master-0 kubenswrapper[3991]: I0308 03:10:54.004567 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5pgg\" (UniqueName: \"kubernetes.io/projected/103158c5-c99f-4224-bf5a-e23b1aaf9172-kube-api-access-m5pgg\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4"
Mar 08 03:10:54.004710 master-0 kubenswrapper[3991]: I0308 03:10:54.004694 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wplgs\" (UniqueName: \"kubernetes.io/projected/bd1bcaff-7dbd-4559-92fc-5453993f643e-kube-api-access-wplgs\") pod \"openshift-config-operator-64488f9d78-d4wnv\" (UID: \"bd1bcaff-7dbd-4559-92fc-5453993f643e\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"
Mar 08 03:10:54.004786 master-0 kubenswrapper[3991]: I0308 03:10:54.004570 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw"]
Mar 08 03:10:54.004962 master-0 kubenswrapper[3991]: I0308 03:10:54.004873 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg"
Mar 08 03:10:54.005098 master-0 kubenswrapper[3991]: I0308 03:10:54.004997 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90ef7c0a-7c6f-45aa-865d-1e247110b265-serving-cert\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg"
Mar 08 03:10:54.005098 master-0 kubenswrapper[3991]: I0308 03:10:54.005063 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg"
Mar 08 03:10:54.005176 master-0 kubenswrapper[3991]: I0308 03:10:54.005130 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs\") pod \"multus-admission-controller-8d675b596-xhkzl\" (UID: \"d5f84bd4-2803-41ff-a1d1-a549991fe895\") " pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl"
Mar 08 03:10:54.005229 master-0 kubenswrapper[3991]: I0308 03:10:54.005201 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v2gh\" (UniqueName: \"kubernetes.io/projected/d5f84bd4-2803-41ff-a1d1-a549991fe895-kube-api-access-7v2gh\") pod \"multus-admission-controller-8d675b596-xhkzl\" (UID: \"d5f84bd4-2803-41ff-a1d1-a549991fe895\") " pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl"
Mar 08 03:10:54.005322 master-0 kubenswrapper[3991]: I0308 03:10:54.005294 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bd1bcaff-7dbd-4559-92fc-5453993f643e-available-featuregates\") pod \"openshift-config-operator-64488f9d78-d4wnv\" (UID: \"bd1bcaff-7dbd-4559-92fc-5453993f643e\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"
Mar 08 03:10:54.005472 master-0 kubenswrapper[3991]: I0308 03:10:54.005458 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1bcaff-7dbd-4559-92fc-5453993f643e-serving-cert\") pod \"openshift-config-operator-64488f9d78-d4wnv\" (UID: \"bd1bcaff-7dbd-4559-92fc-5453993f643e\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"
Mar 08 03:10:54.005584 master-0 kubenswrapper[3991]: I0308 03:10:54.005570 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-config\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg"
Mar 08 03:10:54.005751 master-0 kubenswrapper[3991]: I0308 03:10:54.005733 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4"
Mar 08 03:10:54.005857 master-0 kubenswrapper[3991]: I0308 03:10:54.005828 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/4711e21f-da6d-47ee-8722-64663e05de10-operand-assets\") pod \"cluster-olm-operator-77899cf6d-7vlmt\" (UID: \"4711e21f-da6d-47ee-8722-64663e05de10\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt"
Mar 08 03:10:54.006076 master-0 kubenswrapper[3991]: I0308 03:10:54.006059 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/103158c5-c99f-4224-bf5a-e23b1aaf9172-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4"
Mar 08 03:10:54.006204 master-0 kubenswrapper[3991]: I0308 03:10:54.006191 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4"
Mar 08 03:10:54.006303 master-0 kubenswrapper[3991]: I0308 03:10:54.006289 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"
Mar 08 03:10:54.006373 master-0 kubenswrapper[3991]: I0308 03:10:54.006361 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/197afe92-5912-4e90-a477-e3abe001bbc7-trusted-ca\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"
Mar 08 03:10:54.006457 master-0 kubenswrapper[3991]: I0308 03:10:54.006445 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kd6j\" (UniqueName: \"kubernetes.io/projected/197afe92-5912-4e90-a477-e3abe001bbc7-kube-api-access-2kd6j\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"
Mar 08 03:10:54.006544 master-0 kubenswrapper[3991]: I0308 03:10:54.006528 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls\") pod \"dns-operator-589895fbb7-9mhwc\" (UID: \"ef16d7ae-66aa-45d4-b1a6-1327738a46bb\") " pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc"
Mar 08 03:10:54.006962 master-0 kubenswrapper[3991]: I0308 03:10:54.006945 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgfrv\" (UniqueName: \"kubernetes.io/projected/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-kube-api-access-mgfrv\") pod \"dns-operator-589895fbb7-9mhwc\" (UID: \"ef16d7ae-66aa-45d4-b1a6-1327738a46bb\") " pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc"
Mar 08 03:10:54.007195 master-0 kubenswrapper[3991]: I0308 03:10:54.007140 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4711e21f-da6d-47ee-8722-64663e05de10-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-7vlmt\" (UID: \"4711e21f-da6d-47ee-8722-64663e05de10\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt"
Mar 08 03:10:54.007345 master-0 kubenswrapper[3991]: I0308 03:10:54.007315 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttqvt\" (UniqueName: \"kubernetes.io/projected/90ef7c0a-7c6f-45aa-865d-1e247110b265-kube-api-access-ttqvt\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg"
Mar 08 03:10:54.107929 master-0 kubenswrapper[3991]: I0308 03:10:54.107857 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kd6j\" (UniqueName: \"kubernetes.io/projected/197afe92-5912-4e90-a477-e3abe001bbc7-kube-api-access-2kd6j\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"
Mar 08 03:10:54.107929 master-0 kubenswrapper[3991]: I0308 03:10:54.107917 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll"
Mar 08 03:10:54.108092 master-0 kubenswrapper[3991]: I0308 03:10:54.108059 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls\") pod \"dns-operator-589895fbb7-9mhwc\" (UID: \"ef16d7ae-66aa-45d4-b1a6-1327738a46bb\") " pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc"
Mar 08 03:10:54.108125 master-0 kubenswrapper[3991]: I0308 03:10:54.108112 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgfrv\" (UniqueName: \"kubernetes.io/projected/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-kube-api-access-mgfrv\") pod \"dns-operator-589895fbb7-9mhwc\" (UID: \"ef16d7ae-66aa-45d4-b1a6-1327738a46bb\") " pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc"
Mar 08 03:10:54.108151 master-0 kubenswrapper[3991]: I0308 03:10:54.108134 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4711e21f-da6d-47ee-8722-64663e05de10-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-7vlmt\" (UID: \"4711e21f-da6d-47ee-8722-64663e05de10\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt"
Mar 08 03:10:54.108186 master-0 kubenswrapper[3991]: E0308 03:10:54.108150 3991 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 08 03:10:54.108186 master-0 kubenswrapper[3991]: I0308 03:10:54.108165 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qvl4\" (UniqueName: \"kubernetes.io/projected/1d446527-f3fd-4a37-a980-7445031928d1-kube-api-access-2qvl4\") pod \"kube-storage-version-migrator-operator-7f65c457f5-7k8j7\" (UID: \"1d446527-f3fd-4a37-a980-7445031928d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7"
Mar 08 03:10:54.108239 master-0 kubenswrapper[3991]: E0308 03:10:54.108192 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls podName:ef16d7ae-66aa-45d4-b1a6-1327738a46bb nodeName:}" failed. No retries permitted until 2026-03-08 03:10:54.608178333 +0000 UTC m=+116.174115558 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls") pod "dns-operator-589895fbb7-9mhwc" (UID: "ef16d7ae-66aa-45d4-b1a6-1327738a46bb") : secret "metrics-tls" not found
Mar 08 03:10:54.108359 master-0 kubenswrapper[3991]: I0308 03:10:54.108308 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf"
Mar 08 03:10:54.108480 master-0 kubenswrapper[3991]: I0308 03:10:54.108449 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2468d2a3-ec65-4888-a86a-3f66fa311f56-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-xtwpr\" (UID: \"2468d2a3-ec65-4888-a86a-3f66fa311f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr"
Mar 08 03:10:54.108523 master-0 kubenswrapper[3991]: I0308 03:10:54.108481 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aadf7b67-db33-4392-81f5-1b93eef54545-host-slash\") pod \"iptables-alerter-fpxrc\" (UID: \"aadf7b67-db33-4392-81f5-1b93eef54545\") " pod="openshift-network-operator/iptables-alerter-fpxrc"
Mar 08 03:10:54.108523 master-0 kubenswrapper[3991]: I0308 03:10:54.108502 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-client\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll"
Mar 08 03:10:54.108577 master-0 kubenswrapper[3991]: I0308 03:10:54.108526 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttqvt\" (UniqueName: \"kubernetes.io/projected/90ef7c0a-7c6f-45aa-865d-1e247110b265-kube-api-access-ttqvt\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg"
Mar 08 03:10:54.108577 master-0 kubenswrapper[3991]: I0308 03:10:54.108546 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a058138-8039-4841-821b-7ee5bb8648e4-serving-cert\") pod \"kube-apiserver-operator-68bd585b-zcr8w\" (UID: \"5a058138-8039-4841-821b-7ee5bb8648e4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w"
Mar 08 03:10:54.108657 master-0 kubenswrapper[3991]: I0308 03:10:54.108587 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-h7lpf\" (UID: \"0722d9c3-77b8-4770-9171-d4aeba4b0cc7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf"
Mar 08 03:10:54.109247 master-0 kubenswrapper[3991]: I0308 03:10:54.109187 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-config\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll"
Mar 08 03:10:54.109247 master-0 kubenswrapper[3991]: I0308 03:10:54.109240 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-serving-cert\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll"
Mar 08 03:10:54.109344 master-0 kubenswrapper[3991]: I0308 03:10:54.109260 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-ca\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll"
Mar 08 03:10:54.109388 master-0 kubenswrapper[3991]: I0308 03:10:54.109297 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa64f1b-9f10-488b-8f94-1600774062c4-config\") pod \"service-ca-operator-69b6fc6b88-vjmf6\" (UID: \"1fa64f1b-9f10-488b-8f94-1600774062c4\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6"
Mar 08 03:10:54.109441 master-0 kubenswrapper[3991]: I0308 03:10:54.109398 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gnng\" (UniqueName: \"kubernetes.io/projected/3d69f101-60a8-41fd-bcda-4eb654c626a2-kube-api-access-8gnng\") pod \"csi-snapshot-controller-operator-5685fbc7d-xbrdp\" (UID: \"3d69f101-60a8-41fd-bcda-4eb654c626a2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-xbrdp"
Mar 08 03:10:54.109486 master-0 kubenswrapper[3991]: I0308 03:10:54.109464 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/197afe92-5912-4e90-a477-e3abe001bbc7-bound-sa-token\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"
Mar 08 03:10:54.109530 master-0 kubenswrapper[3991]: I0308 03:10:54.109483 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms6s7\" (UniqueName: \"kubernetes.io/projected/4711e21f-da6d-47ee-8722-64663e05de10-kube-api-access-ms6s7\") pod \"cluster-olm-operator-77899cf6d-7vlmt\" (UID: \"4711e21f-da6d-47ee-8722-64663e05de10\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt"
Mar 08 03:10:54.109577 master-0 kubenswrapper[3991]: I0308 03:10:54.109534 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx"
Mar 08 03:10:54.109701 master-0 kubenswrapper[3991]: I0308 03:10:54.109621 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d82cf0db-0891-482d-856b-1675843042dd-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq"
Mar 08 03:10:54.109768 master-0 kubenswrapper[3991]: I0308 03:10:54.109741 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a058138-8039-4841-821b-7ee5bb8648e4-config\") pod \"kube-apiserver-operator-68bd585b-zcr8w\" (UID: \"5a058138-8039-4841-821b-7ee5bb8648e4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w"
Mar 08 03:10:54.109812 master-0 kubenswrapper[3991]: I0308 03:10:54.109785 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a058138-8039-4841-821b-7ee5bb8648e4-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-zcr8w\" (UID: \"5a058138-8039-4841-821b-7ee5bb8648e4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w"
Mar 08 03:10:54.109850 master-0 kubenswrapper[3991]: I0308 03:10:54.109813 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert\") pod \"catalog-operator-7d9c49f57b-wsswx\" (UID: \"5a92a557-d023-4531-b3a3-e559af0fe358\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx"
Mar 08 03:10:54.109850 master-0 kubenswrapper[3991]: I0308 03:10:54.109842 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgvcz\" (UniqueName: \"kubernetes.io/projected/5a92a557-d023-4531-b3a3-e559af0fe358-kube-api-access-vgvcz\") pod \"catalog-operator-7d9c49f57b-wsswx\" (UID: \"5a92a557-d023-4531-b3a3-e559af0fe358\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx"
Mar 08 03:10:54.109975 master-0 kubenswrapper[3991]: I0308 03:10:54.109942 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d446527-f3fd-4a37-a980-7445031928d1-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-7k8j7\" (UID: \"1d446527-f3fd-4a37-a980-7445031928d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7"
Mar 08 03:10:54.110124 master-0
kubenswrapper[3991]: I0308 03:10:54.110072 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5pgg\" (UniqueName: \"kubernetes.io/projected/103158c5-c99f-4224-bf5a-e23b1aaf9172-kube-api-access-m5pgg\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:10:54.110190 master-0 kubenswrapper[3991]: I0308 03:10:54.110145 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q68p\" (UniqueName: \"kubernetes.io/projected/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-kube-api-access-7q68p\") pod \"package-server-manager-854648ff6d-8qznw\" (UID: \"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" Mar 08 03:10:54.110237 master-0 kubenswrapper[3991]: I0308 03:10:54.110187 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4vq9\" (UniqueName: \"kubernetes.io/projected/aadf7b67-db33-4392-81f5-1b93eef54545-kube-api-access-n4vq9\") pod \"iptables-alerter-fpxrc\" (UID: \"aadf7b67-db33-4392-81f5-1b93eef54545\") " pod="openshift-network-operator/iptables-alerter-fpxrc" Mar 08 03:10:54.110237 master-0 kubenswrapper[3991]: I0308 03:10:54.110226 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnvtg\" (UniqueName: \"kubernetes.io/projected/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-kube-api-access-vnvtg\") pod \"openshift-controller-manager-operator-8565d84698-h7lpf\" (UID: \"0722d9c3-77b8-4770-9171-d4aeba4b0cc7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf" Mar 08 03:10:54.110318 master-0 kubenswrapper[3991]: I0308 03:10:54.110269 3991 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wplgs\" (UniqueName: \"kubernetes.io/projected/bd1bcaff-7dbd-4559-92fc-5453993f643e-kube-api-access-wplgs\") pod \"openshift-config-operator-64488f9d78-d4wnv\" (UID: \"bd1bcaff-7dbd-4559-92fc-5453993f643e\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" Mar 08 03:10:54.110318 master-0 kubenswrapper[3991]: I0308 03:10:54.110298 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" Mar 08 03:10:54.110468 master-0 kubenswrapper[3991]: I0308 03:10:54.110436 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90ef7c0a-7c6f-45aa-865d-1e247110b265-serving-cert\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" Mar 08 03:10:54.110524 master-0 kubenswrapper[3991]: I0308 03:10:54.110484 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" Mar 08 03:10:54.110573 master-0 kubenswrapper[3991]: I0308 03:10:54.110527 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d82cf0db-0891-482d-856b-1675843042dd-bound-sa-token\") pod 
\"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" Mar 08 03:10:54.110573 master-0 kubenswrapper[3991]: I0308 03:10:54.110558 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa64f1b-9f10-488b-8f94-1600774062c4-serving-cert\") pod \"service-ca-operator-69b6fc6b88-vjmf6\" (UID: \"1fa64f1b-9f10-488b-8f94-1600774062c4\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6" Mar 08 03:10:54.110649 master-0 kubenswrapper[3991]: I0308 03:10:54.110585 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdzj9\" (UniqueName: \"kubernetes.io/projected/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-kube-api-access-bdzj9\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" Mar 08 03:10:54.110649 master-0 kubenswrapper[3991]: I0308 03:10:54.110613 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs\") pod \"multus-admission-controller-8d675b596-xhkzl\" (UID: \"d5f84bd4-2803-41ff-a1d1-a549991fe895\") " pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl" Mar 08 03:10:54.110649 master-0 kubenswrapper[3991]: I0308 03:10:54.110640 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k2lp\" (UniqueName: \"kubernetes.io/projected/1fa64f1b-9f10-488b-8f94-1600774062c4-kube-api-access-8k2lp\") pod \"service-ca-operator-69b6fc6b88-vjmf6\" (UID: \"1fa64f1b-9f10-488b-8f94-1600774062c4\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6" 
Mar 08 03:10:54.110750 master-0 kubenswrapper[3991]: I0308 03:10:54.110666 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2468d2a3-ec65-4888-a86a-3f66fa311f56-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-xtwpr\" (UID: \"2468d2a3-ec65-4888-a86a-3f66fa311f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr" Mar 08 03:10:54.110750 master-0 kubenswrapper[3991]: I0308 03:10:54.110694 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89prb\" (UniqueName: \"kubernetes.io/projected/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-kube-api-access-89prb\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:10:54.110750 master-0 kubenswrapper[3991]: I0308 03:10:54.110723 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v2gh\" (UniqueName: \"kubernetes.io/projected/d5f84bd4-2803-41ff-a1d1-a549991fe895-kube-api-access-7v2gh\") pod \"multus-admission-controller-8d675b596-xhkzl\" (UID: \"d5f84bd4-2803-41ff-a1d1-a549991fe895\") " pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl" Mar 08 03:10:54.110750 master-0 kubenswrapper[3991]: I0308 03:10:54.110747 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-config\") pod \"openshift-controller-manager-operator-8565d84698-h7lpf\" (UID: \"0722d9c3-77b8-4770-9171-d4aeba4b0cc7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf" Mar 08 03:10:54.110889 master-0 kubenswrapper[3991]: I0308 03:10:54.110772 3991 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kxn4\" (UniqueName: \"kubernetes.io/projected/ed56c17f-7e15-4776-80a6-3ef091307e89-kube-api-access-4kxn4\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx" Mar 08 03:10:54.110889 master-0 kubenswrapper[3991]: I0308 03:10:54.110802 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bd1bcaff-7dbd-4559-92fc-5453993f643e-available-featuregates\") pod \"openshift-config-operator-64488f9d78-d4wnv\" (UID: \"bd1bcaff-7dbd-4559-92fc-5453993f643e\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" Mar 08 03:10:54.110889 master-0 kubenswrapper[3991]: I0308 03:10:54.110826 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a506cf6-bc39-4089-9caa-4c14c4d15c11-config\") pod \"openshift-apiserver-operator-799b6db4d7-gstfr\" (UID: \"2a506cf6-bc39-4089-9caa-4c14c4d15c11\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" Mar 08 03:10:54.110889 master-0 kubenswrapper[3991]: I0308 03:10:54.110858 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1bcaff-7dbd-4559-92fc-5453993f643e-serving-cert\") pod \"openshift-config-operator-64488f9d78-d4wnv\" (UID: \"bd1bcaff-7dbd-4559-92fc-5453993f643e\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" Mar 08 03:10:54.110889 master-0 kubenswrapper[3991]: I0308 03:10:54.110879 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-config\") pod 
\"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" Mar 08 03:10:54.111288 master-0 kubenswrapper[3991]: I0308 03:10:54.110920 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4kt5\" (UniqueName: \"kubernetes.io/projected/d82cf0db-0891-482d-856b-1675843042dd-kube-api-access-g4kt5\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" Mar 08 03:10:54.111288 master-0 kubenswrapper[3991]: I0308 03:10:54.110946 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" Mar 08 03:10:54.111288 master-0 kubenswrapper[3991]: I0308 03:10:54.110968 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89e15db4-c541-4d53-878d-706fa022f970-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-rz5c8\" (UID: \"89e15db4-c541-4d53-878d-706fa022f970\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" Mar 08 03:10:54.111288 master-0 kubenswrapper[3991]: E0308 03:10:54.110975 3991 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 08 03:10:54.111288 master-0 kubenswrapper[3991]: I0308 03:10:54.111026 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89e15db4-c541-4d53-878d-706fa022f970-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-rz5c8\" (UID: \"89e15db4-c541-4d53-878d-706fa022f970\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" Mar 08 03:10:54.111288 master-0 kubenswrapper[3991]: E0308 03:10:54.111030 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs podName:d5f84bd4-2803-41ff-a1d1-a549991fe895 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:54.611019008 +0000 UTC m=+116.176956233 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs") pod "multus-admission-controller-8d675b596-xhkzl" (UID: "d5f84bd4-2803-41ff-a1d1-a549991fe895") : secret "multus-admission-controller-secret" not found Mar 08 03:10:54.111288 master-0 kubenswrapper[3991]: I0308 03:10:54.111090 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" Mar 08 03:10:54.111288 master-0 kubenswrapper[3991]: I0308 03:10:54.111118 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:10:54.111288 master-0 
kubenswrapper[3991]: I0308 03:10:54.111138 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/4711e21f-da6d-47ee-8722-64663e05de10-operand-assets\") pod \"cluster-olm-operator-77899cf6d-7vlmt\" (UID: \"4711e21f-da6d-47ee-8722-64663e05de10\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt" Mar 08 03:10:54.111288 master-0 kubenswrapper[3991]: I0308 03:10:54.111180 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2468d2a3-ec65-4888-a86a-3f66fa311f56-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-xtwpr\" (UID: \"2468d2a3-ec65-4888-a86a-3f66fa311f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr" Mar 08 03:10:54.111288 master-0 kubenswrapper[3991]: I0308 03:10:54.111201 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d446527-f3fd-4a37-a980-7445031928d1-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-7k8j7\" (UID: \"1d446527-f3fd-4a37-a980-7445031928d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7" Mar 08 03:10:54.111288 master-0 kubenswrapper[3991]: I0308 03:10:54.111220 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert\") pod \"olm-operator-d64cfc9db-t659n\" (UID: \"d68278f6-59d5-4bbf-b969-e47635ffd4cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n" Mar 08 03:10:54.111288 master-0 kubenswrapper[3991]: I0308 03:10:54.111260 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ed56c17f-7e15-4776-80a6-3ef091307e89-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx" Mar 08 03:10:54.111288 master-0 kubenswrapper[3991]: I0308 03:10:54.111284 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/103158c5-c99f-4224-bf5a-e23b1aaf9172-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:10:54.112786 master-0 kubenswrapper[3991]: I0308 03:10:54.111324 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:10:54.112786 master-0 kubenswrapper[3991]: I0308 03:10:54.111349 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" Mar 08 03:10:54.112786 master-0 kubenswrapper[3991]: I0308 03:10:54.111367 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/197afe92-5912-4e90-a477-e3abe001bbc7-trusted-ca\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") 
" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" Mar 08 03:10:54.112786 master-0 kubenswrapper[3991]: I0308 03:10:54.111408 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sstv2\" (UniqueName: \"kubernetes.io/projected/d68278f6-59d5-4bbf-b969-e47635ffd4cc-kube-api-access-sstv2\") pod \"olm-operator-d64cfc9db-t659n\" (UID: \"d68278f6-59d5-4bbf-b969-e47635ffd4cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n" Mar 08 03:10:54.112786 master-0 kubenswrapper[3991]: I0308 03:10:54.111430 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/aadf7b67-db33-4392-81f5-1b93eef54545-iptables-alerter-script\") pod \"iptables-alerter-fpxrc\" (UID: \"aadf7b67-db33-4392-81f5-1b93eef54545\") " pod="openshift-network-operator/iptables-alerter-fpxrc" Mar 08 03:10:54.112786 master-0 kubenswrapper[3991]: I0308 03:10:54.111446 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a506cf6-bc39-4089-9caa-4c14c4d15c11-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-gstfr\" (UID: \"2a506cf6-bc39-4089-9caa-4c14c4d15c11\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" Mar 08 03:10:54.112786 master-0 kubenswrapper[3991]: I0308 03:10:54.111468 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" Mar 08 03:10:54.112786 master-0 kubenswrapper[3991]: I0308 03:10:54.111464 3991 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7flfl\" (UniqueName: \"kubernetes.io/projected/2a506cf6-bc39-4089-9caa-4c14c4d15c11-kube-api-access-7flfl\") pod \"openshift-apiserver-operator-799b6db4d7-gstfr\" (UID: \"2a506cf6-bc39-4089-9caa-4c14c4d15c11\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" Mar 08 03:10:54.112786 master-0 kubenswrapper[3991]: I0308 03:10:54.111496 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bd1bcaff-7dbd-4559-92fc-5453993f643e-available-featuregates\") pod \"openshift-config-operator-64488f9d78-d4wnv\" (UID: \"bd1bcaff-7dbd-4559-92fc-5453993f643e\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" Mar 08 03:10:54.112786 master-0 kubenswrapper[3991]: I0308 03:10:54.111503 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89e15db4-c541-4d53-878d-706fa022f970-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-rz5c8\" (UID: \"89e15db4-c541-4d53-878d-706fa022f970\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" Mar 08 03:10:54.112786 master-0 kubenswrapper[3991]: E0308 03:10:54.111602 3991 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 08 03:10:54.112786 master-0 kubenswrapper[3991]: E0308 03:10:54.111612 3991 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 08 03:10:54.112786 master-0 kubenswrapper[3991]: E0308 03:10:54.111672 3991 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls podName:103158c5-c99f-4224-bf5a-e23b1aaf9172 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:54.611653805 +0000 UTC m=+116.177591030 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-c4zs4" (UID: "103158c5-c99f-4224-bf5a-e23b1aaf9172") : secret "node-tuning-operator-tls" not found Mar 08 03:10:54.112786 master-0 kubenswrapper[3991]: E0308 03:10:54.111690 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert podName:103158c5-c99f-4224-bf5a-e23b1aaf9172 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:54.611680865 +0000 UTC m=+116.177618220 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-c4zs4" (UID: "103158c5-c99f-4224-bf5a-e23b1aaf9172") : secret "performance-addon-operator-webhook-cert" not found Mar 08 03:10:54.113307 master-0 kubenswrapper[3991]: I0308 03:10:54.111735 3991 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-8qznw\" (UID: \"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" Mar 08 03:10:54.113307 master-0 kubenswrapper[3991]: I0308 03:10:54.112196 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: 
\"kubernetes.io/empty-dir/4711e21f-da6d-47ee-8722-64663e05de10-operand-assets\") pod \"cluster-olm-operator-77899cf6d-7vlmt\" (UID: \"4711e21f-da6d-47ee-8722-64663e05de10\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt" Mar 08 03:10:54.113307 master-0 kubenswrapper[3991]: E0308 03:10:54.112243 3991 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 08 03:10:54.113307 master-0 kubenswrapper[3991]: E0308 03:10:54.112289 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls podName:197afe92-5912-4e90-a477-e3abe001bbc7 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:54.612276111 +0000 UTC m=+116.178213456 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls") pod "ingress-operator-677db989d6-4bpl8" (UID: "197afe92-5912-4e90-a477-e3abe001bbc7") : secret "metrics-tls" not found Mar 08 03:10:54.113307 master-0 kubenswrapper[3991]: I0308 03:10:54.112294 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-config\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" Mar 08 03:10:54.113307 master-0 kubenswrapper[3991]: I0308 03:10:54.112612 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/103158c5-c99f-4224-bf5a-e23b1aaf9172-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:10:54.113307 master-0 
kubenswrapper[3991]: I0308 03:10:54.113223 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/197afe92-5912-4e90-a477-e3abe001bbc7-trusted-ca\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"
Mar 08 03:10:54.114883 master-0 kubenswrapper[3991]: I0308 03:10:54.114844 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg"
Mar 08 03:10:54.116545 master-0 kubenswrapper[3991]: I0308 03:10:54.116510 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90ef7c0a-7c6f-45aa-865d-1e247110b265-serving-cert\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg"
Mar 08 03:10:54.128503 master-0 kubenswrapper[3991]: I0308 03:10:54.127360 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1bcaff-7dbd-4559-92fc-5453993f643e-serving-cert\") pod \"openshift-config-operator-64488f9d78-d4wnv\" (UID: \"bd1bcaff-7dbd-4559-92fc-5453993f643e\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"
Mar 08 03:10:54.128503 master-0 kubenswrapper[3991]: I0308 03:10:54.127610 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4711e21f-da6d-47ee-8722-64663e05de10-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-7vlmt\" (UID: \"4711e21f-da6d-47ee-8722-64663e05de10\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt"
Mar 08 03:10:54.131633 master-0 kubenswrapper[3991]: I0308 03:10:54.131586 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms6s7\" (UniqueName: \"kubernetes.io/projected/4711e21f-da6d-47ee-8722-64663e05de10-kube-api-access-ms6s7\") pod \"cluster-olm-operator-77899cf6d-7vlmt\" (UID: \"4711e21f-da6d-47ee-8722-64663e05de10\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt"
Mar 08 03:10:54.132158 master-0 kubenswrapper[3991]: I0308 03:10:54.131652 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/197afe92-5912-4e90-a477-e3abe001bbc7-bound-sa-token\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"
Mar 08 03:10:54.132845 master-0 kubenswrapper[3991]: I0308 03:10:54.132810 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kd6j\" (UniqueName: \"kubernetes.io/projected/197afe92-5912-4e90-a477-e3abe001bbc7-kube-api-access-2kd6j\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"
Mar 08 03:10:54.133075 master-0 kubenswrapper[3991]: I0308 03:10:54.133016 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttqvt\" (UniqueName: \"kubernetes.io/projected/90ef7c0a-7c6f-45aa-865d-1e247110b265-kube-api-access-ttqvt\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg"
Mar 08 03:10:54.133385 master-0 kubenswrapper[3991]: I0308 03:10:54.133355 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wplgs\" (UniqueName: \"kubernetes.io/projected/bd1bcaff-7dbd-4559-92fc-5453993f643e-kube-api-access-wplgs\") pod \"openshift-config-operator-64488f9d78-d4wnv\" (UID: \"bd1bcaff-7dbd-4559-92fc-5453993f643e\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"
Mar 08 03:10:54.133938 master-0 kubenswrapper[3991]: I0308 03:10:54.133886 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgfrv\" (UniqueName: \"kubernetes.io/projected/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-kube-api-access-mgfrv\") pod \"dns-operator-589895fbb7-9mhwc\" (UID: \"ef16d7ae-66aa-45d4-b1a6-1327738a46bb\") " pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc"
Mar 08 03:10:54.134558 master-0 kubenswrapper[3991]: I0308 03:10:54.134505 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gnng\" (UniqueName: \"kubernetes.io/projected/3d69f101-60a8-41fd-bcda-4eb654c626a2-kube-api-access-8gnng\") pod \"csi-snapshot-controller-operator-5685fbc7d-xbrdp\" (UID: \"3d69f101-60a8-41fd-bcda-4eb654c626a2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-xbrdp"
Mar 08 03:10:54.135285 master-0 kubenswrapper[3991]: I0308 03:10:54.135250 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5pgg\" (UniqueName: \"kubernetes.io/projected/103158c5-c99f-4224-bf5a-e23b1aaf9172-kube-api-access-m5pgg\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4"
Mar 08 03:10:54.147994 master-0 kubenswrapper[3991]: I0308 03:10:54.146805 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v2gh\" (UniqueName: \"kubernetes.io/projected/d5f84bd4-2803-41ff-a1d1-a549991fe895-kube-api-access-7v2gh\") pod \"multus-admission-controller-8d675b596-xhkzl\" (UID: \"d5f84bd4-2803-41ff-a1d1-a549991fe895\") " pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl"
Mar 08 03:10:54.213892 master-0 kubenswrapper[3991]: I0308 03:10:54.213792 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx"
Mar 08 03:10:54.214132 master-0 kubenswrapper[3991]: I0308 03:10:54.213895 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d82cf0db-0891-482d-856b-1675843042dd-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq"
Mar 08 03:10:54.214132 master-0 kubenswrapper[3991]: I0308 03:10:54.213983 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a058138-8039-4841-821b-7ee5bb8648e4-config\") pod \"kube-apiserver-operator-68bd585b-zcr8w\" (UID: \"5a058138-8039-4841-821b-7ee5bb8648e4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w"
Mar 08 03:10:54.214132 master-0 kubenswrapper[3991]: E0308 03:10:54.214019 3991 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 08 03:10:54.214132 master-0 kubenswrapper[3991]: E0308 03:10:54.214110 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls podName:ed56c17f-7e15-4776-80a6-3ef091307e89 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:54.714076901 +0000 UTC m=+116.280014216 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-hzlxx" (UID: "ed56c17f-7e15-4776-80a6-3ef091307e89") : secret "cluster-monitoring-operator-tls" not found
Mar 08 03:10:54.214440 master-0 kubenswrapper[3991]: I0308 03:10:54.214035 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a058138-8039-4841-821b-7ee5bb8648e4-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-zcr8w\" (UID: \"5a058138-8039-4841-821b-7ee5bb8648e4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w"
Mar 08 03:10:54.214440 master-0 kubenswrapper[3991]: I0308 03:10:54.214408 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert\") pod \"catalog-operator-7d9c49f57b-wsswx\" (UID: \"5a92a557-d023-4531-b3a3-e559af0fe358\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx"
Mar 08 03:10:54.214632 master-0 kubenswrapper[3991]: I0308 03:10:54.214457 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgvcz\" (UniqueName: \"kubernetes.io/projected/5a92a557-d023-4531-b3a3-e559af0fe358-kube-api-access-vgvcz\") pod \"catalog-operator-7d9c49f57b-wsswx\" (UID: \"5a92a557-d023-4531-b3a3-e559af0fe358\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx"
Mar 08 03:10:54.214632 master-0 kubenswrapper[3991]: E0308 03:10:54.214606 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 08 03:10:54.214753 master-0 kubenswrapper[3991]: I0308 03:10:54.214643 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt"
Mar 08 03:10:54.214753 master-0 kubenswrapper[3991]: E0308 03:10:54.214709 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert podName:5a92a557-d023-4531-b3a3-e559af0fe358 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:54.714675667 +0000 UTC m=+116.280612932 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert") pod "catalog-operator-7d9c49f57b-wsswx" (UID: "5a92a557-d023-4531-b3a3-e559af0fe358") : secret "catalog-operator-serving-cert" not found
Mar 08 03:10:54.214877 master-0 kubenswrapper[3991]: I0308 03:10:54.214771 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d446527-f3fd-4a37-a980-7445031928d1-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-7k8j7\" (UID: \"1d446527-f3fd-4a37-a980-7445031928d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7"
Mar 08 03:10:54.214877 master-0 kubenswrapper[3991]: I0308 03:10:54.214834 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q68p\" (UniqueName: \"kubernetes.io/projected/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-kube-api-access-7q68p\") pod \"package-server-manager-854648ff6d-8qznw\" (UID: \"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw"
Mar 08 03:10:54.215048 master-0 kubenswrapper[3991]: I0308 03:10:54.214889 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4vq9\" (UniqueName: \"kubernetes.io/projected/aadf7b67-db33-4392-81f5-1b93eef54545-kube-api-access-n4vq9\") pod \"iptables-alerter-fpxrc\" (UID: \"aadf7b67-db33-4392-81f5-1b93eef54545\") " pod="openshift-network-operator/iptables-alerter-fpxrc"
Mar 08 03:10:54.215048 master-0 kubenswrapper[3991]: I0308 03:10:54.214984 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnvtg\" (UniqueName: \"kubernetes.io/projected/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-kube-api-access-vnvtg\") pod \"openshift-controller-manager-operator-8565d84698-h7lpf\" (UID: \"0722d9c3-77b8-4770-9171-d4aeba4b0cc7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf"
Mar 08 03:10:54.215048 master-0 kubenswrapper[3991]: I0308 03:10:54.215017 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a058138-8039-4841-821b-7ee5bb8648e4-config\") pod \"kube-apiserver-operator-68bd585b-zcr8w\" (UID: \"5a058138-8039-4841-821b-7ee5bb8648e4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w"
Mar 08 03:10:54.215048 master-0 kubenswrapper[3991]: I0308 03:10:54.215038 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d82cf0db-0891-482d-856b-1675843042dd-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq"
Mar 08 03:10:54.215273 master-0 kubenswrapper[3991]: I0308 03:10:54.215090 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa64f1b-9f10-488b-8f94-1600774062c4-serving-cert\") pod \"service-ca-operator-69b6fc6b88-vjmf6\" (UID: \"1fa64f1b-9f10-488b-8f94-1600774062c4\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6"
Mar 08 03:10:54.215273 master-0 kubenswrapper[3991]: I0308 03:10:54.215139 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdzj9\" (UniqueName: \"kubernetes.io/projected/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-kube-api-access-bdzj9\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf"
Mar 08 03:10:54.215273 master-0 kubenswrapper[3991]: I0308 03:10:54.215213 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k2lp\" (UniqueName: \"kubernetes.io/projected/1fa64f1b-9f10-488b-8f94-1600774062c4-kube-api-access-8k2lp\") pod \"service-ca-operator-69b6fc6b88-vjmf6\" (UID: \"1fa64f1b-9f10-488b-8f94-1600774062c4\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6"
Mar 08 03:10:54.215273 master-0 kubenswrapper[3991]: I0308 03:10:54.215265 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2468d2a3-ec65-4888-a86a-3f66fa311f56-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-xtwpr\" (UID: \"2468d2a3-ec65-4888-a86a-3f66fa311f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr"
Mar 08 03:10:54.225240 master-0 kubenswrapper[3991]: I0308 03:10:54.225176 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d82cf0db-0891-482d-856b-1675843042dd-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq"
Mar 08 03:10:54.225240 master-0 kubenswrapper[3991]: I0308 03:10:54.225224 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d446527-f3fd-4a37-a980-7445031928d1-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-7k8j7\" (UID: \"1d446527-f3fd-4a37-a980-7445031928d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7"
Mar 08 03:10:54.228565 master-0 kubenswrapper[3991]: I0308 03:10:54.225254 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89prb\" (UniqueName: \"kubernetes.io/projected/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-kube-api-access-89prb\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll"
Mar 08 03:10:54.228565 master-0 kubenswrapper[3991]: I0308 03:10:54.225411 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-config\") pod \"openshift-controller-manager-operator-8565d84698-h7lpf\" (UID: \"0722d9c3-77b8-4770-9171-d4aeba4b0cc7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf"
Mar 08 03:10:54.228565 master-0 kubenswrapper[3991]: I0308 03:10:54.225464 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kxn4\" (UniqueName: \"kubernetes.io/projected/ed56c17f-7e15-4776-80a6-3ef091307e89-kube-api-access-4kxn4\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx"
Mar 08 03:10:54.228565 master-0 kubenswrapper[3991]: I0308 03:10:54.225533 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-xbrdp"
Mar 08 03:10:54.228565 master-0 kubenswrapper[3991]: I0308 03:10:54.225758 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa64f1b-9f10-488b-8f94-1600774062c4-serving-cert\") pod \"service-ca-operator-69b6fc6b88-vjmf6\" (UID: \"1fa64f1b-9f10-488b-8f94-1600774062c4\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6"
Mar 08 03:10:54.228565 master-0 kubenswrapper[3991]: I0308 03:10:54.225539 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a506cf6-bc39-4089-9caa-4c14c4d15c11-config\") pod \"openshift-apiserver-operator-799b6db4d7-gstfr\" (UID: \"2a506cf6-bc39-4089-9caa-4c14c4d15c11\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr"
Mar 08 03:10:54.228565 master-0 kubenswrapper[3991]: I0308 03:10:54.226260 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4kt5\" (UniqueName: \"kubernetes.io/projected/d82cf0db-0891-482d-856b-1675843042dd-kube-api-access-g4kt5\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq"
Mar 08 03:10:54.228565 master-0 kubenswrapper[3991]: I0308 03:10:54.226419 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf"
Mar 08 03:10:54.228565 master-0 kubenswrapper[3991]: I0308 03:10:54.226590 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89e15db4-c541-4d53-878d-706fa022f970-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-rz5c8\" (UID: \"89e15db4-c541-4d53-878d-706fa022f970\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8"
Mar 08 03:10:54.228565 master-0 kubenswrapper[3991]: I0308 03:10:54.226716 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89e15db4-c541-4d53-878d-706fa022f970-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-rz5c8\" (UID: \"89e15db4-c541-4d53-878d-706fa022f970\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8"
Mar 08 03:10:54.228565 master-0 kubenswrapper[3991]: I0308 03:10:54.226947 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq"
Mar 08 03:10:54.228565 master-0 kubenswrapper[3991]: I0308 03:10:54.227076 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2468d2a3-ec65-4888-a86a-3f66fa311f56-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-xtwpr\" (UID: \"2468d2a3-ec65-4888-a86a-3f66fa311f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr"
Mar 08 03:10:54.228565 master-0 kubenswrapper[3991]: I0308 03:10:54.227169 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d446527-f3fd-4a37-a980-7445031928d1-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-7k8j7\" (UID: \"1d446527-f3fd-4a37-a980-7445031928d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7"
Mar 08 03:10:54.228565 master-0 kubenswrapper[3991]: I0308 03:10:54.227255 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert\") pod \"olm-operator-d64cfc9db-t659n\" (UID: \"d68278f6-59d5-4bbf-b969-e47635ffd4cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n"
Mar 08 03:10:54.228565 master-0 kubenswrapper[3991]: I0308 03:10:54.227343 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ed56c17f-7e15-4776-80a6-3ef091307e89-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx"
Mar 08 03:10:54.231634 master-0 kubenswrapper[3991]: I0308 03:10:54.227846 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-config\") pod \"openshift-controller-manager-operator-8565d84698-h7lpf\" (UID: \"0722d9c3-77b8-4770-9171-d4aeba4b0cc7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf"
Mar 08 03:10:54.231634 master-0 kubenswrapper[3991]: I0308 03:10:54.228075 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sstv2\" (UniqueName: \"kubernetes.io/projected/d68278f6-59d5-4bbf-b969-e47635ffd4cc-kube-api-access-sstv2\") pod \"olm-operator-d64cfc9db-t659n\" (UID: \"d68278f6-59d5-4bbf-b969-e47635ffd4cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n"
Mar 08 03:10:54.231634 master-0 kubenswrapper[3991]: I0308 03:10:54.228222 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/aadf7b67-db33-4392-81f5-1b93eef54545-iptables-alerter-script\") pod \"iptables-alerter-fpxrc\" (UID: \"aadf7b67-db33-4392-81f5-1b93eef54545\") " pod="openshift-network-operator/iptables-alerter-fpxrc"
Mar 08 03:10:54.231634 master-0 kubenswrapper[3991]: I0308 03:10:54.228650 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a506cf6-bc39-4089-9caa-4c14c4d15c11-config\") pod \"openshift-apiserver-operator-799b6db4d7-gstfr\" (UID: \"2a506cf6-bc39-4089-9caa-4c14c4d15c11\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr"
Mar 08 03:10:54.231634 master-0 kubenswrapper[3991]: I0308 03:10:54.229353 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89e15db4-c541-4d53-878d-706fa022f970-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-rz5c8\" (UID: \"89e15db4-c541-4d53-878d-706fa022f970\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8"
Mar 08 03:10:54.231634 master-0 kubenswrapper[3991]: I0308 03:10:54.229270 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a506cf6-bc39-4089-9caa-4c14c4d15c11-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-gstfr\" (UID: \"2a506cf6-bc39-4089-9caa-4c14c4d15c11\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr"
Mar 08 03:10:54.231634 master-0 kubenswrapper[3991]: I0308 03:10:54.230336 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/aadf7b67-db33-4392-81f5-1b93eef54545-iptables-alerter-script\") pod \"iptables-alerter-fpxrc\" (UID: \"aadf7b67-db33-4392-81f5-1b93eef54545\") " pod="openshift-network-operator/iptables-alerter-fpxrc"
Mar 08 03:10:54.231634 master-0 kubenswrapper[3991]: E0308 03:10:54.230536 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 08 03:10:54.231634 master-0 kubenswrapper[3991]: I0308 03:10:54.230601 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7flfl\" (UniqueName: \"kubernetes.io/projected/2a506cf6-bc39-4089-9caa-4c14c4d15c11-kube-api-access-7flfl\") pod \"openshift-apiserver-operator-799b6db4d7-gstfr\" (UID: \"2a506cf6-bc39-4089-9caa-4c14c4d15c11\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr"
Mar 08 03:10:54.231634 master-0 kubenswrapper[3991]: E0308 03:10:54.230692 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert podName:d68278f6-59d5-4bbf-b969-e47635ffd4cc nodeName:}" failed. No retries permitted until 2026-03-08 03:10:54.730655127 +0000 UTC m=+116.296592382 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert") pod "olm-operator-d64cfc9db-t659n" (UID: "d68278f6-59d5-4bbf-b969-e47635ffd4cc") : secret "olm-operator-serving-cert" not found
Mar 08 03:10:54.231634 master-0 kubenswrapper[3991]: I0308 03:10:54.230742 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89e15db4-c541-4d53-878d-706fa022f970-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-rz5c8\" (UID: \"89e15db4-c541-4d53-878d-706fa022f970\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8"
Mar 08 03:10:54.231634 master-0 kubenswrapper[3991]: I0308 03:10:54.230756 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ed56c17f-7e15-4776-80a6-3ef091307e89-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx"
Mar 08 03:10:54.231634 master-0 kubenswrapper[3991]: I0308 03:10:54.230866 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-8qznw\" (UID: \"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw"
Mar 08 03:10:54.231634 master-0 kubenswrapper[3991]: I0308 03:10:54.230983 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll"
Mar 08 03:10:54.231634 master-0 kubenswrapper[3991]: E0308 03:10:54.231026 3991 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 08 03:10:54.231634 master-0 kubenswrapper[3991]: I0308 03:10:54.231050 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qvl4\" (UniqueName: \"kubernetes.io/projected/1d446527-f3fd-4a37-a980-7445031928d1-kube-api-access-2qvl4\") pod \"kube-storage-version-migrator-operator-7f65c457f5-7k8j7\" (UID: \"1d446527-f3fd-4a37-a980-7445031928d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7"
Mar 08 03:10:54.232841 master-0 kubenswrapper[3991]: I0308 03:10:54.231103 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf"
Mar 08 03:10:54.232841 master-0 kubenswrapper[3991]: E0308 03:10:54.231144 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls podName:d82cf0db-0891-482d-856b-1675843042dd nodeName:}" failed. No retries permitted until 2026-03-08 03:10:54.731095649 +0000 UTC m=+116.297032914 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-brfnq" (UID: "d82cf0db-0891-482d-856b-1675843042dd") : secret "image-registry-operator-tls" not found
Mar 08 03:10:54.232841 master-0 kubenswrapper[3991]: I0308 03:10:54.231190 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2468d2a3-ec65-4888-a86a-3f66fa311f56-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-xtwpr\" (UID: \"2468d2a3-ec65-4888-a86a-3f66fa311f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr"
Mar 08 03:10:54.232841 master-0 kubenswrapper[3991]: E0308 03:10:54.231243 3991 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 08 03:10:54.232841 master-0 kubenswrapper[3991]: I0308 03:10:54.231267 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aadf7b67-db33-4392-81f5-1b93eef54545-host-slash\") pod \"iptables-alerter-fpxrc\" (UID: \"aadf7b67-db33-4392-81f5-1b93eef54545\") " pod="openshift-network-operator/iptables-alerter-fpxrc"
Mar 08 03:10:54.232841 master-0 kubenswrapper[3991]: E0308 03:10:54.231321 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics podName:7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:54.731285984 +0000 UTC m=+116.297223229 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-4pgcf" (UID: "7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6") : secret "marketplace-operator-metrics" not found
Mar 08 03:10:54.232841 master-0 kubenswrapper[3991]: I0308 03:10:54.231352 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-client\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll"
Mar 08 03:10:54.232841 master-0 kubenswrapper[3991]: I0308 03:10:54.231358 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf"
Mar 08 03:10:54.232841 master-0 kubenswrapper[3991]: I0308 03:10:54.231405 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a058138-8039-4841-821b-7ee5bb8648e4-serving-cert\") pod \"kube-apiserver-operator-68bd585b-zcr8w\" (UID: \"5a058138-8039-4841-821b-7ee5bb8648e4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w"
Mar 08 03:10:54.232841 master-0 kubenswrapper[3991]: I0308 03:10:54.231368 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aadf7b67-db33-4392-81f5-1b93eef54545-host-slash\") pod \"iptables-alerter-fpxrc\" (UID: \"aadf7b67-db33-4392-81f5-1b93eef54545\") " pod="openshift-network-operator/iptables-alerter-fpxrc"
Mar 08 03:10:54.232841 master-0 kubenswrapper[3991]: I0308 03:10:54.231657 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d446527-f3fd-4a37-a980-7445031928d1-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-7k8j7\" (UID: \"1d446527-f3fd-4a37-a980-7445031928d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7"
Mar 08 03:10:54.232841 master-0 kubenswrapper[3991]: I0308 03:10:54.231853 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-h7lpf\" (UID: \"0722d9c3-77b8-4770-9171-d4aeba4b0cc7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf"
Mar 08 03:10:54.232841 master-0 kubenswrapper[3991]: I0308 03:10:54.232014 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-config\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll"
Mar 08 03:10:54.232841 master-0 kubenswrapper[3991]: I0308 03:10:54.232227 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-serving-cert\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll"
Mar 08 03:10:54.232841 master-0 kubenswrapper[3991]: I0308 03:10:54.232306 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-ca\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll"
Mar 08 03:10:54.233965 master-0 kubenswrapper[3991]: I0308 03:10:54.232378 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa64f1b-9f10-488b-8f94-1600774062c4-config\") pod \"service-ca-operator-69b6fc6b88-vjmf6\" (UID: \"1fa64f1b-9f10-488b-8f94-1600774062c4\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6"
Mar 08 03:10:54.233965 master-0 kubenswrapper[3991]: E0308 03:10:54.232701 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 08 03:10:54.233965 master-0 kubenswrapper[3991]: E0308 03:10:54.232822 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert podName:f8711b9f-3d18-4b8d-a263-2c9af9dc68a6 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:54.732790134 +0000 UTC m=+116.298727409 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-8qznw" (UID: "f8711b9f-3d18-4b8d-a263-2c9af9dc68a6") : secret "package-server-manager-serving-cert" not found
Mar 08 03:10:54.233965 master-0 kubenswrapper[3991]: I0308 03:10:54.233702 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa64f1b-9f10-488b-8f94-1600774062c4-config\") pod \"service-ca-operator-69b6fc6b88-vjmf6\" (UID: \"1fa64f1b-9f10-488b-8f94-1600774062c4\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6"
Mar 08 03:10:54.234650 master-0 kubenswrapper[3991]: I0308 03:10:54.234603 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2468d2a3-ec65-4888-a86a-3f66fa311f56-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-xtwpr\" (UID: \"2468d2a3-ec65-4888-a86a-3f66fa311f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr"
Mar 08 03:10:54.235428 master-0 kubenswrapper[3991]: I0308 03:10:54.235380 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll"
Mar 08 03:10:54.235548 master-0 kubenswrapper[3991]: I0308 03:10:54.235385 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-ca\") pod \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " 
pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:10:54.235548 master-0 kubenswrapper[3991]: I0308 03:10:54.235465 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-config\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:10:54.240105 master-0 kubenswrapper[3991]: I0308 03:10:54.239418 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a058138-8039-4841-821b-7ee5bb8648e4-serving-cert\") pod \"kube-apiserver-operator-68bd585b-zcr8w\" (UID: \"5a058138-8039-4841-821b-7ee5bb8648e4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w" Mar 08 03:10:54.243106 master-0 kubenswrapper[3991]: I0308 03:10:54.240759 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2468d2a3-ec65-4888-a86a-3f66fa311f56-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-xtwpr\" (UID: \"2468d2a3-ec65-4888-a86a-3f66fa311f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr" Mar 08 03:10:54.243106 master-0 kubenswrapper[3991]: I0308 03:10:54.240835 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-h7lpf\" (UID: \"0722d9c3-77b8-4770-9171-d4aeba4b0cc7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf" Mar 08 03:10:54.243106 master-0 kubenswrapper[3991]: I0308 03:10:54.240850 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-client\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:10:54.243355 master-0 kubenswrapper[3991]: I0308 03:10:54.243258 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" Mar 08 03:10:54.244518 master-0 kubenswrapper[3991]: I0308 03:10:54.244435 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-serving-cert\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:10:54.244857 master-0 kubenswrapper[3991]: I0308 03:10:54.244802 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a506cf6-bc39-4089-9caa-4c14c4d15c11-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-gstfr\" (UID: \"2a506cf6-bc39-4089-9caa-4c14c4d15c11\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" Mar 08 03:10:54.247703 master-0 kubenswrapper[3991]: I0308 03:10:54.247590 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89e15db4-c541-4d53-878d-706fa022f970-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-rz5c8\" (UID: \"89e15db4-c541-4d53-878d-706fa022f970\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" Mar 08 03:10:54.258980 master-0 kubenswrapper[3991]: I0308 03:10:54.258936 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgvcz\" (UniqueName: 
\"kubernetes.io/projected/5a92a557-d023-4531-b3a3-e559af0fe358-kube-api-access-vgvcz\") pod \"catalog-operator-7d9c49f57b-wsswx\" (UID: \"5a92a557-d023-4531-b3a3-e559af0fe358\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx" Mar 08 03:10:54.262716 master-0 kubenswrapper[3991]: I0308 03:10:54.262421 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" Mar 08 03:10:54.267958 master-0 kubenswrapper[3991]: I0308 03:10:54.267893 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a058138-8039-4841-821b-7ee5bb8648e4-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-zcr8w\" (UID: \"5a058138-8039-4841-821b-7ee5bb8648e4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w" Mar 08 03:10:54.287810 master-0 kubenswrapper[3991]: I0308 03:10:54.287549 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4vq9\" (UniqueName: \"kubernetes.io/projected/aadf7b67-db33-4392-81f5-1b93eef54545-kube-api-access-n4vq9\") pod \"iptables-alerter-fpxrc\" (UID: \"aadf7b67-db33-4392-81f5-1b93eef54545\") " pod="openshift-network-operator/iptables-alerter-fpxrc" Mar 08 03:10:54.317233 master-0 kubenswrapper[3991]: I0308 03:10:54.317149 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnvtg\" (UniqueName: \"kubernetes.io/projected/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-kube-api-access-vnvtg\") pod \"openshift-controller-manager-operator-8565d84698-h7lpf\" (UID: \"0722d9c3-77b8-4770-9171-d4aeba4b0cc7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf" Mar 08 03:10:54.328500 master-0 kubenswrapper[3991]: I0308 03:10:54.327830 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdzj9\" 
(UniqueName: \"kubernetes.io/projected/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-kube-api-access-bdzj9\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" Mar 08 03:10:54.347646 master-0 kubenswrapper[3991]: I0308 03:10:54.347587 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w" Mar 08 03:10:54.354877 master-0 kubenswrapper[3991]: I0308 03:10:54.354837 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf" Mar 08 03:10:54.362554 master-0 kubenswrapper[3991]: I0308 03:10:54.361435 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d82cf0db-0891-482d-856b-1675843042dd-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" Mar 08 03:10:54.375004 master-0 kubenswrapper[3991]: I0308 03:10:54.374594 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k2lp\" (UniqueName: \"kubernetes.io/projected/1fa64f1b-9f10-488b-8f94-1600774062c4-kube-api-access-8k2lp\") pod \"service-ca-operator-69b6fc6b88-vjmf6\" (UID: \"1fa64f1b-9f10-488b-8f94-1600774062c4\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6" Mar 08 03:10:54.386698 master-0 kubenswrapper[3991]: I0308 03:10:54.386660 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6" Mar 08 03:10:54.392701 master-0 kubenswrapper[3991]: I0308 03:10:54.390741 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q68p\" (UniqueName: \"kubernetes.io/projected/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-kube-api-access-7q68p\") pod \"package-server-manager-854648ff6d-8qznw\" (UID: \"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" Mar 08 03:10:54.412840 master-0 kubenswrapper[3991]: I0308 03:10:54.412210 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2468d2a3-ec65-4888-a86a-3f66fa311f56-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-xtwpr\" (UID: \"2468d2a3-ec65-4888-a86a-3f66fa311f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr" Mar 08 03:10:54.414648 master-0 kubenswrapper[3991]: I0308 03:10:54.414337 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-fpxrc" Mar 08 03:10:54.475116 master-0 kubenswrapper[3991]: I0308 03:10:54.471086 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89e15db4-c541-4d53-878d-706fa022f970-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-rz5c8\" (UID: \"89e15db4-c541-4d53-878d-706fa022f970\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" Mar 08 03:10:54.479738 master-0 kubenswrapper[3991]: I0308 03:10:54.479665 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4kt5\" (UniqueName: \"kubernetes.io/projected/d82cf0db-0891-482d-856b-1675843042dd-kube-api-access-g4kt5\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" Mar 08 03:10:54.494201 master-0 kubenswrapper[3991]: I0308 03:10:54.494157 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kxn4\" (UniqueName: \"kubernetes.io/projected/ed56c17f-7e15-4776-80a6-3ef091307e89-kube-api-access-4kxn4\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx" Mar 08 03:10:54.502820 master-0 kubenswrapper[3991]: I0308 03:10:54.502756 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89prb\" (UniqueName: \"kubernetes.io/projected/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-kube-api-access-89prb\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:10:54.525326 master-0 kubenswrapper[3991]: I0308 03:10:54.525268 3991 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt"] Mar 08 03:10:54.529374 master-0 kubenswrapper[3991]: I0308 03:10:54.529323 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7flfl\" (UniqueName: \"kubernetes.io/projected/2a506cf6-bc39-4089-9caa-4c14c4d15c11-kube-api-access-7flfl\") pod \"openshift-apiserver-operator-799b6db4d7-gstfr\" (UID: \"2a506cf6-bc39-4089-9caa-4c14c4d15c11\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" Mar 08 03:10:54.547431 master-0 kubenswrapper[3991]: I0308 03:10:54.546665 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-xbrdp"] Mar 08 03:10:54.554241 master-0 kubenswrapper[3991]: W0308 03:10:54.550440 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4711e21f_da6d_47ee_8722_64663e05de10.slice/crio-b47ec93978468330f5b6fd9911611a54c62310997396935ab30d9d7feb5533c5 WatchSource:0}: Error finding container b47ec93978468330f5b6fd9911611a54c62310997396935ab30d9d7feb5533c5: Status 404 returned error can't find the container with id b47ec93978468330f5b6fd9911611a54c62310997396935ab30d9d7feb5533c5 Mar 08 03:10:54.562305 master-0 kubenswrapper[3991]: I0308 03:10:54.562267 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sstv2\" (UniqueName: \"kubernetes.io/projected/d68278f6-59d5-4bbf-b969-e47635ffd4cc-kube-api-access-sstv2\") pod \"olm-operator-d64cfc9db-t659n\" (UID: \"d68278f6-59d5-4bbf-b969-e47635ffd4cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n" Mar 08 03:10:54.562431 master-0 kubenswrapper[3991]: I0308 03:10:54.562376 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg"] Mar 08 03:10:54.564003 master-0 
kubenswrapper[3991]: W0308 03:10:54.563966 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d69f101_60a8_41fd_bcda_4eb654c626a2.slice/crio-1232aad5956093753d35685897e21ebb416211a87662dd6ecf51a5d3e9c0b32a WatchSource:0}: Error finding container 1232aad5956093753d35685897e21ebb416211a87662dd6ecf51a5d3e9c0b32a: Status 404 returned error can't find the container with id 1232aad5956093753d35685897e21ebb416211a87662dd6ecf51a5d3e9c0b32a Mar 08 03:10:54.568028 master-0 kubenswrapper[3991]: I0308 03:10:54.567983 3991 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qvl4\" (UniqueName: \"kubernetes.io/projected/1d446527-f3fd-4a37-a980-7445031928d1-kube-api-access-2qvl4\") pod \"kube-storage-version-migrator-operator-7f65c457f5-7k8j7\" (UID: \"1d446527-f3fd-4a37-a980-7445031928d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7" Mar 08 03:10:54.568288 master-0 kubenswrapper[3991]: I0308 03:10:54.568253 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" Mar 08 03:10:54.595290 master-0 kubenswrapper[3991]: I0308 03:10:54.595247 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"] Mar 08 03:10:54.612859 master-0 kubenswrapper[3991]: I0308 03:10:54.612747 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr" Mar 08 03:10:54.623111 master-0 kubenswrapper[3991]: I0308 03:10:54.623077 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7" Mar 08 03:10:54.630562 master-0 kubenswrapper[3991]: I0308 03:10:54.630413 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" Mar 08 03:10:54.637838 master-0 kubenswrapper[3991]: I0308 03:10:54.637794 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls\") pod \"dns-operator-589895fbb7-9mhwc\" (UID: \"ef16d7ae-66aa-45d4-b1a6-1327738a46bb\") " pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc" Mar 08 03:10:54.637936 master-0 kubenswrapper[3991]: I0308 03:10:54.637918 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs\") pod \"multus-admission-controller-8d675b596-xhkzl\" (UID: \"d5f84bd4-2803-41ff-a1d1-a549991fe895\") " pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl" Mar 08 03:10:54.638016 master-0 kubenswrapper[3991]: I0308 03:10:54.637987 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:10:54.638050 master-0 kubenswrapper[3991]: I0308 03:10:54.638023 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls\") pod 
\"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:10:54.638103 master-0 kubenswrapper[3991]: I0308 03:10:54.638049 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" Mar 08 03:10:54.638203 master-0 kubenswrapper[3991]: E0308 03:10:54.638178 3991 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 08 03:10:54.638273 master-0 kubenswrapper[3991]: E0308 03:10:54.638246 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls podName:197afe92-5912-4e90-a477-e3abe001bbc7 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:55.638226156 +0000 UTC m=+117.204163381 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls") pod "ingress-operator-677db989d6-4bpl8" (UID: "197afe92-5912-4e90-a477-e3abe001bbc7") : secret "metrics-tls" not found Mar 08 03:10:54.638661 master-0 kubenswrapper[3991]: E0308 03:10:54.638627 3991 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 08 03:10:54.638753 master-0 kubenswrapper[3991]: E0308 03:10:54.638669 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls podName:ef16d7ae-66aa-45d4-b1a6-1327738a46bb nodeName:}" failed. 
No retries permitted until 2026-03-08 03:10:55.638659128 +0000 UTC m=+117.204596353 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls") pod "dns-operator-589895fbb7-9mhwc" (UID: "ef16d7ae-66aa-45d4-b1a6-1327738a46bb") : secret "metrics-tls" not found Mar 08 03:10:54.640225 master-0 kubenswrapper[3991]: E0308 03:10:54.639942 3991 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 08 03:10:54.640225 master-0 kubenswrapper[3991]: E0308 03:10:54.640026 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs podName:d5f84bd4-2803-41ff-a1d1-a549991fe895 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:55.640014563 +0000 UTC m=+117.205951788 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs") pod "multus-admission-controller-8d675b596-xhkzl" (UID: "d5f84bd4-2803-41ff-a1d1-a549991fe895") : secret "multus-admission-controller-secret" not found Mar 08 03:10:54.640225 master-0 kubenswrapper[3991]: E0308 03:10:54.640128 3991 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 08 03:10:54.640225 master-0 kubenswrapper[3991]: E0308 03:10:54.640187 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert podName:103158c5-c99f-4224-bf5a-e23b1aaf9172 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:55.640168327 +0000 UTC m=+117.206105552 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-c4zs4" (UID: "103158c5-c99f-4224-bf5a-e23b1aaf9172") : secret "performance-addon-operator-webhook-cert" not found Mar 08 03:10:54.640225 master-0 kubenswrapper[3991]: E0308 03:10:54.640231 3991 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 08 03:10:54.640545 master-0 kubenswrapper[3991]: E0308 03:10:54.640258 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls podName:103158c5-c99f-4224-bf5a-e23b1aaf9172 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:55.64025024 +0000 UTC m=+117.206187465 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-c4zs4" (UID: "103158c5-c99f-4224-bf5a-e23b1aaf9172") : secret "node-tuning-operator-tls" not found Mar 08 03:10:54.641954 master-0 kubenswrapper[3991]: I0308 03:10:54.641501 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w"] Mar 08 03:10:54.660722 master-0 kubenswrapper[3991]: W0308 03:10:54.652443 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0722d9c3_77b8_4770_9171_d4aeba4b0cc7.slice/crio-c6d3624a26cf17ed6d9d863dbd0123f9d75c4ad1fd279b49f51b9d0ec0bcd2e7 WatchSource:0}: Error finding container c6d3624a26cf17ed6d9d863dbd0123f9d75c4ad1fd279b49f51b9d0ec0bcd2e7: Status 404 returned error can't find the container with id 
c6d3624a26cf17ed6d9d863dbd0123f9d75c4ad1fd279b49f51b9d0ec0bcd2e7 Mar 08 03:10:54.660722 master-0 kubenswrapper[3991]: I0308 03:10:54.657523 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf"] Mar 08 03:10:54.663129 master-0 kubenswrapper[3991]: I0308 03:10:54.663080 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6"] Mar 08 03:10:54.678602 master-0 kubenswrapper[3991]: W0308 03:10:54.678551 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fa64f1b_9f10_488b_8f94_1600774062c4.slice/crio-975b4d0b44381f65f95d81f848a4362b6807994f0beac99be40baae93513b5d6 WatchSource:0}: Error finding container 975b4d0b44381f65f95d81f848a4362b6807994f0beac99be40baae93513b5d6: Status 404 returned error can't find the container with id 975b4d0b44381f65f95d81f848a4362b6807994f0beac99be40baae93513b5d6 Mar 08 03:10:54.684739 master-0 kubenswrapper[3991]: W0308 03:10:54.679794 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a058138_8039_4841_821b_7ee5bb8648e4.slice/crio-0a9eb19952ec20b1658c5d7279dba5a3e819952572f69b34c3995c362fd16f77 WatchSource:0}: Error finding container 0a9eb19952ec20b1658c5d7279dba5a3e819952572f69b34c3995c362fd16f77: Status 404 returned error can't find the container with id 0a9eb19952ec20b1658c5d7279dba5a3e819952572f69b34c3995c362fd16f77 Mar 08 03:10:54.699165 master-0 kubenswrapper[3991]: I0308 03:10:54.698619 3991 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:10:54.740820 master-0 kubenswrapper[3991]: I0308 03:10:54.740762 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert\") pod \"catalog-operator-7d9c49f57b-wsswx\" (UID: \"5a92a557-d023-4531-b3a3-e559af0fe358\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx" Mar 08 03:10:54.740971 master-0 kubenswrapper[3991]: I0308 03:10:54.740848 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" Mar 08 03:10:54.740971 master-0 kubenswrapper[3991]: I0308 03:10:54.740874 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert\") pod \"olm-operator-d64cfc9db-t659n\" (UID: \"d68278f6-59d5-4bbf-b969-e47635ffd4cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n" Mar 08 03:10:54.741108 master-0 kubenswrapper[3991]: E0308 03:10:54.741091 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 08 03:10:54.741160 master-0 kubenswrapper[3991]: E0308 03:10:54.741104 3991 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 08 03:10:54.741160 master-0 kubenswrapper[3991]: E0308 03:10:54.741139 3991 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert podName:5a92a557-d023-4531-b3a3-e559af0fe358 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:55.741126355 +0000 UTC m=+117.307063580 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert") pod "catalog-operator-7d9c49f57b-wsswx" (UID: "5a92a557-d023-4531-b3a3-e559af0fe358") : secret "catalog-operator-serving-cert" not found Mar 08 03:10:54.741160 master-0 kubenswrapper[3991]: E0308 03:10:54.741153 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls podName:d82cf0db-0891-482d-856b-1675843042dd nodeName:}" failed. No retries permitted until 2026-03-08 03:10:55.741147306 +0000 UTC m=+117.307084531 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-brfnq" (UID: "d82cf0db-0891-482d-856b-1675843042dd") : secret "image-registry-operator-tls" not found Mar 08 03:10:54.741344 master-0 kubenswrapper[3991]: E0308 03:10:54.741330 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 08 03:10:54.741384 master-0 kubenswrapper[3991]: E0308 03:10:54.741355 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert podName:d68278f6-59d5-4bbf-b969-e47635ffd4cc nodeName:}" failed. No retries permitted until 2026-03-08 03:10:55.741348761 +0000 UTC m=+117.307285986 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert") pod "olm-operator-d64cfc9db-t659n" (UID: "d68278f6-59d5-4bbf-b969-e47635ffd4cc") : secret "olm-operator-serving-cert" not found Mar 08 03:10:54.741433 master-0 kubenswrapper[3991]: I0308 03:10:54.741382 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-8qznw\" (UID: \"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" Mar 08 03:10:54.741433 master-0 kubenswrapper[3991]: I0308 03:10:54.741411 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" Mar 08 03:10:54.741498 master-0 kubenswrapper[3991]: I0308 03:10:54.741437 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx" Mar 08 03:10:54.741498 master-0 kubenswrapper[3991]: E0308 03:10:54.741489 3991 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 08 03:10:54.741557 master-0 kubenswrapper[3991]: E0308 03:10:54.741507 3991 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls podName:ed56c17f-7e15-4776-80a6-3ef091307e89 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:55.741501615 +0000 UTC m=+117.307438840 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-hzlxx" (UID: "ed56c17f-7e15-4776-80a6-3ef091307e89") : secret "cluster-monitoring-operator-tls" not found Mar 08 03:10:54.741557 master-0 kubenswrapper[3991]: E0308 03:10:54.741538 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 08 03:10:54.741557 master-0 kubenswrapper[3991]: E0308 03:10:54.741554 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert podName:f8711b9f-3d18-4b8d-a263-2c9af9dc68a6 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:55.741549376 +0000 UTC m=+117.307486601 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-8qznw" (UID: "f8711b9f-3d18-4b8d-a263-2c9af9dc68a6") : secret "package-server-manager-serving-cert" not found Mar 08 03:10:54.741655 master-0 kubenswrapper[3991]: E0308 03:10:54.741582 3991 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 08 03:10:54.741655 master-0 kubenswrapper[3991]: E0308 03:10:54.741598 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics podName:7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:55.741592748 +0000 UTC m=+117.307529973 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-4pgcf" (UID: "7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6") : secret "marketplace-operator-metrics" not found Mar 08 03:10:54.763213 master-0 kubenswrapper[3991]: I0308 03:10:54.763159 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt" event={"ID":"4711e21f-da6d-47ee-8722-64663e05de10","Type":"ContainerStarted","Data":"b47ec93978468330f5b6fd9911611a54c62310997396935ab30d9d7feb5533c5"} Mar 08 03:10:54.765343 master-0 kubenswrapper[3991]: I0308 03:10:54.765318 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-fpxrc" event={"ID":"aadf7b67-db33-4392-81f5-1b93eef54545","Type":"ContainerStarted","Data":"e7ddc2cc17107ecc5f5679a895a40a2316543cd8ac3957bbb6fdbfd52f258bbd"} Mar 08 03:10:54.766657 master-0 
kubenswrapper[3991]: I0308 03:10:54.766608 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf" event={"ID":"0722d9c3-77b8-4770-9171-d4aeba4b0cc7","Type":"ContainerStarted","Data":"c6d3624a26cf17ed6d9d863dbd0123f9d75c4ad1fd279b49f51b9d0ec0bcd2e7"} Mar 08 03:10:54.767945 master-0 kubenswrapper[3991]: I0308 03:10:54.767922 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w" event={"ID":"5a058138-8039-4841-821b-7ee5bb8648e4","Type":"ContainerStarted","Data":"0a9eb19952ec20b1658c5d7279dba5a3e819952572f69b34c3995c362fd16f77"} Mar 08 03:10:54.768872 master-0 kubenswrapper[3991]: I0308 03:10:54.768834 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6" event={"ID":"1fa64f1b-9f10-488b-8f94-1600774062c4","Type":"ContainerStarted","Data":"975b4d0b44381f65f95d81f848a4362b6807994f0beac99be40baae93513b5d6"} Mar 08 03:10:54.771109 master-0 kubenswrapper[3991]: I0308 03:10:54.771050 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" event={"ID":"90ef7c0a-7c6f-45aa-865d-1e247110b265","Type":"ContainerStarted","Data":"32cd08c82c3a9782e49f0aedb6e9aa5133016a2e1a1a498bd5a24df1a9fb1acd"} Mar 08 03:10:54.771217 master-0 kubenswrapper[3991]: I0308 03:10:54.771186 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8"] Mar 08 03:10:54.773019 master-0 kubenswrapper[3991]: I0308 03:10:54.772981 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" 
event={"ID":"bd1bcaff-7dbd-4559-92fc-5453993f643e","Type":"ContainerStarted","Data":"78bd83c51ec0b72f8c1c51a4e8cc4279f7e9fc2470a6586c4f8e968fc90dd9c1"} Mar 08 03:10:54.779227 master-0 kubenswrapper[3991]: I0308 03:10:54.779172 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-xbrdp" event={"ID":"3d69f101-60a8-41fd-bcda-4eb654c626a2","Type":"ContainerStarted","Data":"1232aad5956093753d35685897e21ebb416211a87662dd6ecf51a5d3e9c0b32a"} Mar 08 03:10:54.791249 master-0 kubenswrapper[3991]: I0308 03:10:54.791171 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7"] Mar 08 03:10:54.818285 master-0 kubenswrapper[3991]: W0308 03:10:54.818235 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d446527_f3fd_4a37_a980_7445031928d1.slice/crio-17b37add10475bc68eb15628021eecebb97b383f212ff9b1f6eec1b7b5ecb93d WatchSource:0}: Error finding container 17b37add10475bc68eb15628021eecebb97b383f212ff9b1f6eec1b7b5ecb93d: Status 404 returned error can't find the container with id 17b37add10475bc68eb15628021eecebb97b383f212ff9b1f6eec1b7b5ecb93d Mar 08 03:10:54.879037 master-0 kubenswrapper[3991]: I0308 03:10:54.878932 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll"] Mar 08 03:10:54.899402 master-0 kubenswrapper[3991]: W0308 03:10:54.899357 3991 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6e4afd0_fbcd_49c7_9132_b54c9c28b74b.slice/crio-d577cf22293cc3efccf6f8d7b5c5def3ac27aeb747212f6643892edfacc4bbc3 WatchSource:0}: Error finding container d577cf22293cc3efccf6f8d7b5c5def3ac27aeb747212f6643892edfacc4bbc3: Status 404 returned error can't find the container with id 
d577cf22293cc3efccf6f8d7b5c5def3ac27aeb747212f6643892edfacc4bbc3 Mar 08 03:10:55.042405 master-0 kubenswrapper[3991]: I0308 03:10:55.042351 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr"] Mar 08 03:10:55.046383 master-0 kubenswrapper[3991]: I0308 03:10:55.046353 3991 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr"] Mar 08 03:10:55.216630 master-0 kubenswrapper[3991]: I0308 03:10:55.216388 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:10:55.216630 master-0 kubenswrapper[3991]: I0308 03:10:55.216409 3991 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4lx8s" Mar 08 03:10:55.220324 master-0 kubenswrapper[3991]: I0308 03:10:55.219217 3991 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 08 03:10:55.220324 master-0 kubenswrapper[3991]: I0308 03:10:55.219445 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 08 03:10:55.220324 master-0 kubenswrapper[3991]: I0308 03:10:55.219510 3991 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 08 03:10:55.652860 master-0 kubenswrapper[3991]: I0308 03:10:55.652807 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs\") pod \"multus-admission-controller-8d675b596-xhkzl\" (UID: \"d5f84bd4-2803-41ff-a1d1-a549991fe895\") " pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl" Mar 08 03:10:55.654098 master-0 kubenswrapper[3991]: 
I0308 03:10:55.652892 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:10:55.654098 master-0 kubenswrapper[3991]: I0308 03:10:55.652944 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:10:55.654098 master-0 kubenswrapper[3991]: I0308 03:10:55.652967 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" Mar 08 03:10:55.654098 master-0 kubenswrapper[3991]: I0308 03:10:55.653023 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls\") pod \"dns-operator-589895fbb7-9mhwc\" (UID: \"ef16d7ae-66aa-45d4-b1a6-1327738a46bb\") " pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc" Mar 08 03:10:55.654098 master-0 kubenswrapper[3991]: E0308 03:10:55.653270 3991 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 08 03:10:55.654098 
master-0 kubenswrapper[3991]: E0308 03:10:55.653328 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert podName:103158c5-c99f-4224-bf5a-e23b1aaf9172 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:57.65331085 +0000 UTC m=+119.219248075 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-c4zs4" (UID: "103158c5-c99f-4224-bf5a-e23b1aaf9172") : secret "performance-addon-operator-webhook-cert" not found Mar 08 03:10:55.654098 master-0 kubenswrapper[3991]: E0308 03:10:55.653376 3991 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 08 03:10:55.654098 master-0 kubenswrapper[3991]: E0308 03:10:55.653479 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls podName:103158c5-c99f-4224-bf5a-e23b1aaf9172 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:57.653456314 +0000 UTC m=+119.219393549 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-c4zs4" (UID: "103158c5-c99f-4224-bf5a-e23b1aaf9172") : secret "node-tuning-operator-tls" not found Mar 08 03:10:55.654098 master-0 kubenswrapper[3991]: E0308 03:10:55.653387 3991 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 08 03:10:55.654098 master-0 kubenswrapper[3991]: E0308 03:10:55.653533 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs podName:d5f84bd4-2803-41ff-a1d1-a549991fe895 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:57.653519925 +0000 UTC m=+119.219457150 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs") pod "multus-admission-controller-8d675b596-xhkzl" (UID: "d5f84bd4-2803-41ff-a1d1-a549991fe895") : secret "multus-admission-controller-secret" not found Mar 08 03:10:55.654098 master-0 kubenswrapper[3991]: E0308 03:10:55.653534 3991 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 08 03:10:55.654098 master-0 kubenswrapper[3991]: E0308 03:10:55.653624 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls podName:197afe92-5912-4e90-a477-e3abe001bbc7 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:57.653604588 +0000 UTC m=+119.219541803 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls") pod "ingress-operator-677db989d6-4bpl8" (UID: "197afe92-5912-4e90-a477-e3abe001bbc7") : secret "metrics-tls" not found Mar 08 03:10:55.654098 master-0 kubenswrapper[3991]: E0308 03:10:55.653771 3991 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 08 03:10:55.654098 master-0 kubenswrapper[3991]: E0308 03:10:55.653825 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls podName:ef16d7ae-66aa-45d4-b1a6-1327738a46bb nodeName:}" failed. No retries permitted until 2026-03-08 03:10:57.653802483 +0000 UTC m=+119.219739828 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls") pod "dns-operator-589895fbb7-9mhwc" (UID: "ef16d7ae-66aa-45d4-b1a6-1327738a46bb") : secret "metrics-tls" not found Mar 08 03:10:55.753982 master-0 kubenswrapper[3991]: I0308 03:10:55.753934 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert\") pod \"olm-operator-d64cfc9db-t659n\" (UID: \"d68278f6-59d5-4bbf-b969-e47635ffd4cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n" Mar 08 03:10:55.754160 master-0 kubenswrapper[3991]: I0308 03:10:55.754016 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-8qznw\" (UID: \"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" Mar 08 
03:10:55.754160 master-0 kubenswrapper[3991]: I0308 03:10:55.754070 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" Mar 08 03:10:55.754160 master-0 kubenswrapper[3991]: I0308 03:10:55.754101 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx" Mar 08 03:10:55.754160 master-0 kubenswrapper[3991]: I0308 03:10:55.754126 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert\") pod \"catalog-operator-7d9c49f57b-wsswx\" (UID: \"5a92a557-d023-4531-b3a3-e559af0fe358\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx" Mar 08 03:10:55.754280 master-0 kubenswrapper[3991]: I0308 03:10:55.754203 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" Mar 08 03:10:55.754355 master-0 kubenswrapper[3991]: E0308 03:10:55.754332 3991 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: 
secret "image-registry-operator-tls" not found Mar 08 03:10:55.754399 master-0 kubenswrapper[3991]: E0308 03:10:55.754386 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls podName:d82cf0db-0891-482d-856b-1675843042dd nodeName:}" failed. No retries permitted until 2026-03-08 03:10:57.75437306 +0000 UTC m=+119.320310285 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-brfnq" (UID: "d82cf0db-0891-482d-856b-1675843042dd") : secret "image-registry-operator-tls" not found Mar 08 03:10:55.754735 master-0 kubenswrapper[3991]: E0308 03:10:55.754705 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 08 03:10:55.754773 master-0 kubenswrapper[3991]: E0308 03:10:55.754742 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert podName:d68278f6-59d5-4bbf-b969-e47635ffd4cc nodeName:}" failed. No retries permitted until 2026-03-08 03:10:57.75473193 +0000 UTC m=+119.320669155 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert") pod "olm-operator-d64cfc9db-t659n" (UID: "d68278f6-59d5-4bbf-b969-e47635ffd4cc") : secret "olm-operator-serving-cert" not found Mar 08 03:10:55.754817 master-0 kubenswrapper[3991]: E0308 03:10:55.754788 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 08 03:10:55.754817 master-0 kubenswrapper[3991]: E0308 03:10:55.754814 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert podName:f8711b9f-3d18-4b8d-a263-2c9af9dc68a6 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:57.754805222 +0000 UTC m=+119.320742447 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-8qznw" (UID: "f8711b9f-3d18-4b8d-a263-2c9af9dc68a6") : secret "package-server-manager-serving-cert" not found Mar 08 03:10:55.754898 master-0 kubenswrapper[3991]: E0308 03:10:55.754849 3991 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 08 03:10:55.754898 master-0 kubenswrapper[3991]: E0308 03:10:55.754867 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics podName:7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:57.754860393 +0000 UTC m=+119.320797618 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-4pgcf" (UID: "7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6") : secret "marketplace-operator-metrics" not found Mar 08 03:10:55.754974 master-0 kubenswrapper[3991]: E0308 03:10:55.754895 3991 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 08 03:10:55.754974 master-0 kubenswrapper[3991]: E0308 03:10:55.754930 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls podName:ed56c17f-7e15-4776-80a6-3ef091307e89 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:57.754924745 +0000 UTC m=+119.320861970 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-hzlxx" (UID: "ed56c17f-7e15-4776-80a6-3ef091307e89") : secret "cluster-monitoring-operator-tls" not found Mar 08 03:10:55.754974 master-0 kubenswrapper[3991]: E0308 03:10:55.754963 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 08 03:10:55.755057 master-0 kubenswrapper[3991]: E0308 03:10:55.754979 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert podName:5a92a557-d023-4531-b3a3-e559af0fe358 nodeName:}" failed. No retries permitted until 2026-03-08 03:10:57.754974226 +0000 UTC m=+119.320911451 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert") pod "catalog-operator-7d9c49f57b-wsswx" (UID: "5a92a557-d023-4531-b3a3-e559af0fe358") : secret "catalog-operator-serving-cert" not found Mar 08 03:10:55.785181 master-0 kubenswrapper[3991]: I0308 03:10:55.785141 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" event={"ID":"2a506cf6-bc39-4089-9caa-4c14c4d15c11","Type":"ContainerStarted","Data":"33abd37edec3b6673abf4565124ec1bb97dfb231042f8c1557bae037c9db586c"} Mar 08 03:10:55.788642 master-0 kubenswrapper[3991]: I0308 03:10:55.787133 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w" event={"ID":"5a058138-8039-4841-821b-7ee5bb8648e4","Type":"ContainerStarted","Data":"0ece4a43051b1635cbb843e7e2b46319cb5de6a10e2de8626c1fb83227bc0d72"} Mar 08 03:10:55.790274 master-0 kubenswrapper[3991]: I0308 03:10:55.790251 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" event={"ID":"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b","Type":"ContainerStarted","Data":"d577cf22293cc3efccf6f8d7b5c5def3ac27aeb747212f6643892edfacc4bbc3"} Mar 08 03:10:55.793548 master-0 kubenswrapper[3991]: I0308 03:10:55.793522 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr" event={"ID":"2468d2a3-ec65-4888-a86a-3f66fa311f56","Type":"ContainerStarted","Data":"b835d8031dbcbc04b5cf9f5f9326f7df63aa6cc447918f61407dc7395da0cf96"} Mar 08 03:10:55.795766 master-0 kubenswrapper[3991]: I0308 03:10:55.795737 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7" 
event={"ID":"1d446527-f3fd-4a37-a980-7445031928d1","Type":"ContainerStarted","Data":"17b37add10475bc68eb15628021eecebb97b383f212ff9b1f6eec1b7b5ecb93d"} Mar 08 03:10:55.799769 master-0 kubenswrapper[3991]: I0308 03:10:55.799521 3991 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" event={"ID":"89e15db4-c541-4d53-878d-706fa022f970","Type":"ContainerStarted","Data":"3656e53b736cafa9b6c056ac5eca5807c9f3942f84ffbe91cd640949d983eff6"} Mar 08 03:10:57.676397 master-0 kubenswrapper[3991]: I0308 03:10:57.676270 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs\") pod \"multus-admission-controller-8d675b596-xhkzl\" (UID: \"d5f84bd4-2803-41ff-a1d1-a549991fe895\") " pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl" Mar 08 03:10:57.676397 master-0 kubenswrapper[3991]: I0308 03:10:57.676364 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:10:57.676397 master-0 kubenswrapper[3991]: I0308 03:10:57.676406 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:10:57.677162 master-0 kubenswrapper[3991]: I0308 03:10:57.676435 3991 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" Mar 08 03:10:57.677162 master-0 kubenswrapper[3991]: I0308 03:10:57.676481 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls\") pod \"dns-operator-589895fbb7-9mhwc\" (UID: \"ef16d7ae-66aa-45d4-b1a6-1327738a46bb\") " pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc" Mar 08 03:10:57.677162 master-0 kubenswrapper[3991]: E0308 03:10:57.676600 3991 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 08 03:10:57.677162 master-0 kubenswrapper[3991]: E0308 03:10:57.676643 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls podName:ef16d7ae-66aa-45d4-b1a6-1327738a46bb nodeName:}" failed. No retries permitted until 2026-03-08 03:11:01.676628516 +0000 UTC m=+123.242565741 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls") pod "dns-operator-589895fbb7-9mhwc" (UID: "ef16d7ae-66aa-45d4-b1a6-1327738a46bb") : secret "metrics-tls" not found Mar 08 03:10:57.677162 master-0 kubenswrapper[3991]: E0308 03:10:57.676975 3991 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 08 03:10:57.677162 master-0 kubenswrapper[3991]: E0308 03:10:57.677000 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs podName:d5f84bd4-2803-41ff-a1d1-a549991fe895 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:01.676990766 +0000 UTC m=+123.242928001 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs") pod "multus-admission-controller-8d675b596-xhkzl" (UID: "d5f84bd4-2803-41ff-a1d1-a549991fe895") : secret "multus-admission-controller-secret" not found Mar 08 03:10:57.677162 master-0 kubenswrapper[3991]: E0308 03:10:57.677035 3991 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 08 03:10:57.677162 master-0 kubenswrapper[3991]: E0308 03:10:57.677052 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert podName:103158c5-c99f-4224-bf5a-e23b1aaf9172 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:01.677046407 +0000 UTC m=+123.242983632 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-c4zs4" (UID: "103158c5-c99f-4224-bf5a-e23b1aaf9172") : secret "performance-addon-operator-webhook-cert" not found Mar 08 03:10:57.677162 master-0 kubenswrapper[3991]: E0308 03:10:57.677080 3991 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 08 03:10:57.677162 master-0 kubenswrapper[3991]: E0308 03:10:57.677096 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls podName:103158c5-c99f-4224-bf5a-e23b1aaf9172 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:01.677091189 +0000 UTC m=+123.243028414 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-c4zs4" (UID: "103158c5-c99f-4224-bf5a-e23b1aaf9172") : secret "node-tuning-operator-tls" not found Mar 08 03:10:57.677162 master-0 kubenswrapper[3991]: E0308 03:10:57.677126 3991 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 08 03:10:57.677162 master-0 kubenswrapper[3991]: E0308 03:10:57.677140 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls podName:197afe92-5912-4e90-a477-e3abe001bbc7 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:01.67713558 +0000 UTC m=+123.243072805 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls") pod "ingress-operator-677db989d6-4bpl8" (UID: "197afe92-5912-4e90-a477-e3abe001bbc7") : secret "metrics-tls" not found Mar 08 03:10:57.777512 master-0 kubenswrapper[3991]: I0308 03:10:57.777391 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert\") pod \"olm-operator-d64cfc9db-t659n\" (UID: \"d68278f6-59d5-4bbf-b969-e47635ffd4cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n" Mar 08 03:10:57.777512 master-0 kubenswrapper[3991]: I0308 03:10:57.777514 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-8qznw\" (UID: \"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" Mar 08 03:10:57.777830 master-0 kubenswrapper[3991]: I0308 03:10:57.777582 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" Mar 08 03:10:57.777830 master-0 kubenswrapper[3991]: I0308 03:10:57.777645 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: 
\"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx" Mar 08 03:10:57.777830 master-0 kubenswrapper[3991]: I0308 03:10:57.777686 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert\") pod \"catalog-operator-7d9c49f57b-wsswx\" (UID: \"5a92a557-d023-4531-b3a3-e559af0fe358\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx" Mar 08 03:10:57.777830 master-0 kubenswrapper[3991]: I0308 03:10:57.777792 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" Mar 08 03:10:57.778163 master-0 kubenswrapper[3991]: E0308 03:10:57.778000 3991 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 08 03:10:57.778163 master-0 kubenswrapper[3991]: E0308 03:10:57.778073 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls podName:d82cf0db-0891-482d-856b-1675843042dd nodeName:}" failed. No retries permitted until 2026-03-08 03:11:01.778050917 +0000 UTC m=+123.343988182 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-brfnq" (UID: "d82cf0db-0891-482d-856b-1675843042dd") : secret "image-registry-operator-tls" not found Mar 08 03:10:57.778327 master-0 kubenswrapper[3991]: E0308 03:10:57.778254 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 08 03:10:57.778406 master-0 kubenswrapper[3991]: E0308 03:10:57.778299 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 08 03:10:57.778406 master-0 kubenswrapper[3991]: E0308 03:10:57.778367 3991 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 08 03:10:57.778406 master-0 kubenswrapper[3991]: E0308 03:10:57.778407 3991 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 08 03:10:57.778653 master-0 kubenswrapper[3991]: E0308 03:10:57.778330 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert podName:5a92a557-d023-4531-b3a3-e559af0fe358 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:01.778311043 +0000 UTC m=+123.344248268 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert") pod "catalog-operator-7d9c49f57b-wsswx" (UID: "5a92a557-d023-4531-b3a3-e559af0fe358") : secret "catalog-operator-serving-cert" not found Mar 08 03:10:57.778653 master-0 kubenswrapper[3991]: E0308 03:10:57.778449 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics podName:7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:01.778440417 +0000 UTC m=+123.344377642 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-4pgcf" (UID: "7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6") : secret "marketplace-operator-metrics" not found Mar 08 03:10:57.778653 master-0 kubenswrapper[3991]: E0308 03:10:57.778462 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert podName:d68278f6-59d5-4bbf-b969-e47635ffd4cc nodeName:}" failed. No retries permitted until 2026-03-08 03:11:01.778455637 +0000 UTC m=+123.344392862 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert") pod "olm-operator-d64cfc9db-t659n" (UID: "d68278f6-59d5-4bbf-b969-e47635ffd4cc") : secret "olm-operator-serving-cert" not found Mar 08 03:10:57.778653 master-0 kubenswrapper[3991]: E0308 03:10:57.778472 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls podName:ed56c17f-7e15-4776-80a6-3ef091307e89 nodeName:}" failed. 
No retries permitted until 2026-03-08 03:11:01.778467537 +0000 UTC m=+123.344404762 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-hzlxx" (UID: "ed56c17f-7e15-4776-80a6-3ef091307e89") : secret "cluster-monitoring-operator-tls" not found Mar 08 03:10:57.778653 master-0 kubenswrapper[3991]: E0308 03:10:57.778511 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 08 03:10:57.778653 master-0 kubenswrapper[3991]: E0308 03:10:57.778594 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert podName:f8711b9f-3d18-4b8d-a263-2c9af9dc68a6 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:01.77855811 +0000 UTC m=+123.344495425 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-8qznw" (UID: "f8711b9f-3d18-4b8d-a263-2c9af9dc68a6") : secret "package-server-manager-serving-cert" not found Mar 08 03:10:59.251221 master-0 kubenswrapper[3991]: I0308 03:10:59.251048 3991 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w" podStartSLOduration=86.251009353 podStartE2EDuration="1m26.251009353s" podCreationTimestamp="2026-03-08 03:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:10:55.804277624 +0000 UTC m=+117.370214859" watchObservedRunningTime="2026-03-08 03:10:59.251009353 +0000 UTC m=+120.816946588" Mar 08 03:11:01.719386 master-0 kubenswrapper[3991]: I0308 03:11:01.719282 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls\") pod \"dns-operator-589895fbb7-9mhwc\" (UID: \"ef16d7ae-66aa-45d4-b1a6-1327738a46bb\") " pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc" Mar 08 03:11:01.720393 master-0 kubenswrapper[3991]: I0308 03:11:01.719483 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs\") pod \"multus-admission-controller-8d675b596-xhkzl\" (UID: \"d5f84bd4-2803-41ff-a1d1-a549991fe895\") " pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl" Mar 08 03:11:01.720393 master-0 kubenswrapper[3991]: E0308 03:11:01.719490 3991 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 08 
03:11:01.720393 master-0 kubenswrapper[3991]: E0308 03:11:01.719634 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls podName:ef16d7ae-66aa-45d4-b1a6-1327738a46bb nodeName:}" failed. No retries permitted until 2026-03-08 03:11:09.719603061 +0000 UTC m=+131.285540326 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls") pod "dns-operator-589895fbb7-9mhwc" (UID: "ef16d7ae-66aa-45d4-b1a6-1327738a46bb") : secret "metrics-tls" not found Mar 08 03:11:01.720393 master-0 kubenswrapper[3991]: E0308 03:11:01.719688 3991 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 08 03:11:01.720393 master-0 kubenswrapper[3991]: E0308 03:11:01.719812 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs podName:d5f84bd4-2803-41ff-a1d1-a549991fe895 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:09.719776626 +0000 UTC m=+131.285713891 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs") pod "multus-admission-controller-8d675b596-xhkzl" (UID: "d5f84bd4-2803-41ff-a1d1-a549991fe895") : secret "multus-admission-controller-secret" not found Mar 08 03:11:01.720393 master-0 kubenswrapper[3991]: I0308 03:11:01.719931 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:11:01.720393 master-0 kubenswrapper[3991]: I0308 03:11:01.720006 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:11:01.720393 master-0 kubenswrapper[3991]: E0308 03:11:01.720191 3991 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 08 03:11:01.720393 master-0 kubenswrapper[3991]: I0308 03:11:01.720207 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" Mar 08 03:11:01.720393 master-0 kubenswrapper[3991]: E0308 03:11:01.720294 3991 
secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 08 03:11:01.720981 master-0 kubenswrapper[3991]: E0308 03:11:01.720302 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert podName:103158c5-c99f-4224-bf5a-e23b1aaf9172 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:09.720270199 +0000 UTC m=+131.286207454 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-c4zs4" (UID: "103158c5-c99f-4224-bf5a-e23b1aaf9172") : secret "performance-addon-operator-webhook-cert" not found Mar 08 03:11:01.720981 master-0 kubenswrapper[3991]: E0308 03:11:01.720294 3991 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 08 03:11:01.720981 master-0 kubenswrapper[3991]: E0308 03:11:01.720486 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls podName:197afe92-5912-4e90-a477-e3abe001bbc7 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:09.720453474 +0000 UTC m=+131.286390739 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls") pod "ingress-operator-677db989d6-4bpl8" (UID: "197afe92-5912-4e90-a477-e3abe001bbc7") : secret "metrics-tls" not found Mar 08 03:11:01.720981 master-0 kubenswrapper[3991]: E0308 03:11:01.720516 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls podName:103158c5-c99f-4224-bf5a-e23b1aaf9172 nodeName:}" failed. 
No retries permitted until 2026-03-08 03:11:09.720500815 +0000 UTC m=+131.286438070 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-c4zs4" (UID: "103158c5-c99f-4224-bf5a-e23b1aaf9172") : secret "node-tuning-operator-tls" not found Mar 08 03:11:01.820928 master-0 kubenswrapper[3991]: I0308 03:11:01.820848 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" Mar 08 03:11:01.821370 master-0 kubenswrapper[3991]: E0308 03:11:01.821135 3991 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 08 03:11:01.821370 master-0 kubenswrapper[3991]: I0308 03:11:01.821186 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert\") pod \"olm-operator-d64cfc9db-t659n\" (UID: \"d68278f6-59d5-4bbf-b969-e47635ffd4cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n" Mar 08 03:11:01.821370 master-0 kubenswrapper[3991]: E0308 03:11:01.821250 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls podName:d82cf0db-0891-482d-856b-1675843042dd nodeName:}" failed. No retries permitted until 2026-03-08 03:11:09.821220146 +0000 UTC m=+131.387157411 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-brfnq" (UID: "d82cf0db-0891-482d-856b-1675843042dd") : secret "image-registry-operator-tls" not found Mar 08 03:11:01.821370 master-0 kubenswrapper[3991]: I0308 03:11:01.821351 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-8qznw\" (UID: \"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" Mar 08 03:11:01.821370 master-0 kubenswrapper[3991]: E0308 03:11:01.821371 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 08 03:11:01.821716 master-0 kubenswrapper[3991]: I0308 03:11:01.821442 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" Mar 08 03:11:01.821716 master-0 kubenswrapper[3991]: I0308 03:11:01.821514 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx" Mar 08 03:11:01.821716 
master-0 kubenswrapper[3991]: I0308 03:11:01.821593 3991 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert\") pod \"catalog-operator-7d9c49f57b-wsswx\" (UID: \"5a92a557-d023-4531-b3a3-e559af0fe358\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx" Mar 08 03:11:01.821716 master-0 kubenswrapper[3991]: E0308 03:11:01.821622 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert podName:d68278f6-59d5-4bbf-b969-e47635ffd4cc nodeName:}" failed. No retries permitted until 2026-03-08 03:11:09.821587706 +0000 UTC m=+131.387524961 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert") pod "olm-operator-d64cfc9db-t659n" (UID: "d68278f6-59d5-4bbf-b969-e47635ffd4cc") : secret "olm-operator-serving-cert" not found Mar 08 03:11:01.821716 master-0 kubenswrapper[3991]: E0308 03:11:01.821647 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 08 03:11:01.821716 master-0 kubenswrapper[3991]: E0308 03:11:01.821700 3991 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 08 03:11:01.822107 master-0 kubenswrapper[3991]: E0308 03:11:01.821756 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert podName:f8711b9f-3d18-4b8d-a263-2c9af9dc68a6 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:09.82171926 +0000 UTC m=+131.387656595 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-8qznw" (UID: "f8711b9f-3d18-4b8d-a263-2c9af9dc68a6") : secret "package-server-manager-serving-cert" not found Mar 08 03:11:01.822107 master-0 kubenswrapper[3991]: E0308 03:11:01.821794 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls podName:ed56c17f-7e15-4776-80a6-3ef091307e89 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:09.821777271 +0000 UTC m=+131.387714666 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-hzlxx" (UID: "ed56c17f-7e15-4776-80a6-3ef091307e89") : secret "cluster-monitoring-operator-tls" not found Mar 08 03:11:01.822107 master-0 kubenswrapper[3991]: E0308 03:11:01.821801 3991 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 08 03:11:01.822107 master-0 kubenswrapper[3991]: E0308 03:11:01.821804 3991 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 08 03:11:01.822107 master-0 kubenswrapper[3991]: E0308 03:11:01.821847 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert podName:5a92a557-d023-4531-b3a3-e559af0fe358 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:09.821832293 +0000 UTC m=+131.387769558 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert") pod "catalog-operator-7d9c49f57b-wsswx" (UID: "5a92a557-d023-4531-b3a3-e559af0fe358") : secret "catalog-operator-serving-cert" not found Mar 08 03:11:01.822107 master-0 kubenswrapper[3991]: E0308 03:11:01.821870 3991 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics podName:7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:09.821855553 +0000 UTC m=+131.387792818 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-4pgcf" (UID: "7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6") : secret "marketplace-operator-metrics" not found Mar 08 03:11:03.485014 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 08 03:11:03.503654 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 08 03:11:03.504226 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 08 03:11:03.507408 master-0 systemd[1]: kubelet.service: Consumed 9.974s CPU time. Mar 08 03:11:03.533379 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 08 03:11:03.650529 master-0 kubenswrapper[7387]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 08 03:11:03.650529 master-0 kubenswrapper[7387]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. 
Mar 08 03:11:03.650529 master-0 kubenswrapper[7387]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 08 03:11:03.650529 master-0 kubenswrapper[7387]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 08 03:11:03.650529 master-0 kubenswrapper[7387]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 08 03:11:03.650529 master-0 kubenswrapper[7387]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 08 03:11:03.651786 master-0 kubenswrapper[7387]: I0308 03:11:03.650589 7387 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 08 03:11:03.653035 master-0 kubenswrapper[7387]: W0308 03:11:03.653004 7387 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 08 03:11:03.653035 master-0 kubenswrapper[7387]: W0308 03:11:03.653021 7387 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 08 03:11:03.653035 master-0 kubenswrapper[7387]: W0308 03:11:03.653025 7387 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 08 03:11:03.653035 master-0 kubenswrapper[7387]: W0308 03:11:03.653029 7387 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 08 03:11:03.653035 master-0 kubenswrapper[7387]: W0308 03:11:03.653033 7387 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 08 03:11:03.653035 master-0 kubenswrapper[7387]: W0308 03:11:03.653038 7387 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 08 03:11:03.653035 master-0 kubenswrapper[7387]: W0308 03:11:03.653042 7387 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 08 03:11:03.653035 master-0 kubenswrapper[7387]: W0308 03:11:03.653046 7387 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 08 03:11:03.653035 master-0 kubenswrapper[7387]: W0308 03:11:03.653051 7387 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 08 03:11:03.653341 master-0 kubenswrapper[7387]: W0308 03:11:03.653056 7387 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 08 03:11:03.653341 master-0 kubenswrapper[7387]: W0308 03:11:03.653060 7387 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 08 03:11:03.653341 master-0 kubenswrapper[7387]: W0308 03:11:03.653065 7387 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. 
It will be removed in a future release. Mar 08 03:11:03.653341 master-0 kubenswrapper[7387]: W0308 03:11:03.653070 7387 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 08 03:11:03.653341 master-0 kubenswrapper[7387]: W0308 03:11:03.653074 7387 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 08 03:11:03.653341 master-0 kubenswrapper[7387]: W0308 03:11:03.653078 7387 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 08 03:11:03.653341 master-0 kubenswrapper[7387]: W0308 03:11:03.653082 7387 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 08 03:11:03.653341 master-0 kubenswrapper[7387]: W0308 03:11:03.653086 7387 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 08 03:11:03.653341 master-0 kubenswrapper[7387]: W0308 03:11:03.653089 7387 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 08 03:11:03.653341 master-0 kubenswrapper[7387]: W0308 03:11:03.653093 7387 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 08 03:11:03.653341 master-0 kubenswrapper[7387]: W0308 03:11:03.653097 7387 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 08 03:11:03.653341 master-0 kubenswrapper[7387]: W0308 03:11:03.653101 7387 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 08 03:11:03.653341 master-0 kubenswrapper[7387]: W0308 03:11:03.653104 7387 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 08 03:11:03.653341 master-0 kubenswrapper[7387]: W0308 03:11:03.653108 7387 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 08 03:11:03.653341 master-0 kubenswrapper[7387]: W0308 03:11:03.653111 7387 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 08 03:11:03.653341 master-0 kubenswrapper[7387]: W0308 03:11:03.653115 7387 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 08 03:11:03.653341 
master-0 kubenswrapper[7387]: W0308 03:11:03.653118 7387 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 08 03:11:03.653341 master-0 kubenswrapper[7387]: W0308 03:11:03.653121 7387 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 08 03:11:03.653341 master-0 kubenswrapper[7387]: W0308 03:11:03.653125 7387 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 08 03:11:03.653341 master-0 kubenswrapper[7387]: W0308 03:11:03.653130 7387 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 08 03:11:03.653987 master-0 kubenswrapper[7387]: W0308 03:11:03.653134 7387 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 08 03:11:03.653987 master-0 kubenswrapper[7387]: W0308 03:11:03.653137 7387 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 08 03:11:03.653987 master-0 kubenswrapper[7387]: W0308 03:11:03.653141 7387 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 08 03:11:03.653987 master-0 kubenswrapper[7387]: W0308 03:11:03.653145 7387 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 08 03:11:03.653987 master-0 kubenswrapper[7387]: W0308 03:11:03.653148 7387 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 08 03:11:03.653987 master-0 kubenswrapper[7387]: W0308 03:11:03.653152 7387 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 08 03:11:03.653987 master-0 kubenswrapper[7387]: W0308 03:11:03.653156 7387 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 08 03:11:03.653987 master-0 kubenswrapper[7387]: W0308 03:11:03.653159 7387 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 08 03:11:03.653987 master-0 kubenswrapper[7387]: W0308 03:11:03.653163 7387 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 08 03:11:03.653987 master-0 
kubenswrapper[7387]: W0308 03:11:03.653167 7387 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 08 03:11:03.653987 master-0 kubenswrapper[7387]: W0308 03:11:03.653170 7387 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 08 03:11:03.653987 master-0 kubenswrapper[7387]: W0308 03:11:03.653174 7387 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 08 03:11:03.653987 master-0 kubenswrapper[7387]: W0308 03:11:03.653179 7387 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 08 03:11:03.653987 master-0 kubenswrapper[7387]: W0308 03:11:03.653183 7387 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 08 03:11:03.653987 master-0 kubenswrapper[7387]: W0308 03:11:03.653187 7387 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 08 03:11:03.653987 master-0 kubenswrapper[7387]: W0308 03:11:03.653191 7387 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 08 03:11:03.653987 master-0 kubenswrapper[7387]: W0308 03:11:03.653195 7387 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 08 03:11:03.653987 master-0 kubenswrapper[7387]: W0308 03:11:03.653199 7387 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 08 03:11:03.653987 master-0 kubenswrapper[7387]: W0308 03:11:03.653202 7387 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 08 03:11:03.654566 master-0 kubenswrapper[7387]: W0308 03:11:03.653206 7387 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 08 03:11:03.654566 master-0 kubenswrapper[7387]: W0308 03:11:03.653209 7387 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 08 03:11:03.654566 master-0 kubenswrapper[7387]: W0308 03:11:03.653213 7387 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. 
It will be removed in a future release. Mar 08 03:11:03.654566 master-0 kubenswrapper[7387]: W0308 03:11:03.653218 7387 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 08 03:11:03.654566 master-0 kubenswrapper[7387]: W0308 03:11:03.653222 7387 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 08 03:11:03.654566 master-0 kubenswrapper[7387]: W0308 03:11:03.653226 7387 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 08 03:11:03.654566 master-0 kubenswrapper[7387]: W0308 03:11:03.653230 7387 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 08 03:11:03.654566 master-0 kubenswrapper[7387]: W0308 03:11:03.653234 7387 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 08 03:11:03.654566 master-0 kubenswrapper[7387]: W0308 03:11:03.653237 7387 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 08 03:11:03.654566 master-0 kubenswrapper[7387]: W0308 03:11:03.653241 7387 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 08 03:11:03.654566 master-0 kubenswrapper[7387]: W0308 03:11:03.653244 7387 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 08 03:11:03.654566 master-0 kubenswrapper[7387]: W0308 03:11:03.653248 7387 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 08 03:11:03.654566 master-0 kubenswrapper[7387]: W0308 03:11:03.653251 7387 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 08 03:11:03.654566 master-0 kubenswrapper[7387]: W0308 03:11:03.653255 7387 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 08 03:11:03.654566 master-0 kubenswrapper[7387]: W0308 03:11:03.653258 7387 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 08 03:11:03.654566 master-0 kubenswrapper[7387]: W0308 03:11:03.653262 7387 feature_gate.go:330] unrecognized feature gate: 
OnClusterBuild Mar 08 03:11:03.654566 master-0 kubenswrapper[7387]: W0308 03:11:03.653265 7387 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 08 03:11:03.654566 master-0 kubenswrapper[7387]: W0308 03:11:03.653269 7387 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 08 03:11:03.654566 master-0 kubenswrapper[7387]: W0308 03:11:03.653274 7387 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 08 03:11:03.655111 master-0 kubenswrapper[7387]: W0308 03:11:03.653278 7387 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 08 03:11:03.655111 master-0 kubenswrapper[7387]: W0308 03:11:03.653282 7387 feature_gate.go:330] unrecognized feature gate: Example Mar 08 03:11:03.655111 master-0 kubenswrapper[7387]: W0308 03:11:03.653287 7387 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 08 03:11:03.655111 master-0 kubenswrapper[7387]: W0308 03:11:03.653296 7387 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 08 03:11:03.655111 master-0 kubenswrapper[7387]: W0308 03:11:03.653299 7387 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 08 03:11:03.655111 master-0 kubenswrapper[7387]: I0308 03:11:03.653375 7387 flags.go:64] FLAG: --address="0.0.0.0" Mar 08 03:11:03.655111 master-0 kubenswrapper[7387]: I0308 03:11:03.653384 7387 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 08 03:11:03.655111 master-0 kubenswrapper[7387]: I0308 03:11:03.653393 7387 flags.go:64] FLAG: --anonymous-auth="true" Mar 08 03:11:03.655111 master-0 kubenswrapper[7387]: I0308 03:11:03.653400 7387 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 08 03:11:03.655111 master-0 kubenswrapper[7387]: I0308 03:11:03.653407 7387 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 08 03:11:03.655111 master-0 kubenswrapper[7387]: I0308 03:11:03.653412 7387 flags.go:64] FLAG: 
--authentication-token-webhook-cache-ttl="2m0s" Mar 08 03:11:03.655111 master-0 kubenswrapper[7387]: I0308 03:11:03.653418 7387 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 08 03:11:03.655111 master-0 kubenswrapper[7387]: I0308 03:11:03.653424 7387 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 08 03:11:03.655111 master-0 kubenswrapper[7387]: I0308 03:11:03.653428 7387 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 08 03:11:03.655111 master-0 kubenswrapper[7387]: I0308 03:11:03.653432 7387 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 08 03:11:03.655111 master-0 kubenswrapper[7387]: I0308 03:11:03.653437 7387 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 08 03:11:03.655111 master-0 kubenswrapper[7387]: I0308 03:11:03.653442 7387 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 08 03:11:03.655111 master-0 kubenswrapper[7387]: I0308 03:11:03.653446 7387 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 08 03:11:03.655111 master-0 kubenswrapper[7387]: I0308 03:11:03.653451 7387 flags.go:64] FLAG: --cgroup-root="" Mar 08 03:11:03.655111 master-0 kubenswrapper[7387]: I0308 03:11:03.653460 7387 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 08 03:11:03.655111 master-0 kubenswrapper[7387]: I0308 03:11:03.653469 7387 flags.go:64] FLAG: --client-ca-file="" Mar 08 03:11:03.655111 master-0 kubenswrapper[7387]: I0308 03:11:03.653475 7387 flags.go:64] FLAG: --cloud-config="" Mar 08 03:11:03.655111 master-0 kubenswrapper[7387]: I0308 03:11:03.653480 7387 flags.go:64] FLAG: --cloud-provider="" Mar 08 03:11:03.655111 master-0 kubenswrapper[7387]: I0308 03:11:03.653485 7387 flags.go:64] FLAG: --cluster-dns="[]" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653492 7387 flags.go:64] FLAG: --cluster-domain="" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653497 7387 flags.go:64] FLAG: 
--config="/etc/kubernetes/kubelet.conf" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653502 7387 flags.go:64] FLAG: --config-dir="" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653507 7387 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653512 7387 flags.go:64] FLAG: --container-log-max-files="5" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653520 7387 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653525 7387 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653530 7387 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653537 7387 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653544 7387 flags.go:64] FLAG: --contention-profiling="false" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653549 7387 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653558 7387 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653563 7387 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653568 7387 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653573 7387 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653577 7387 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653581 7387 
flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653585 7387 flags.go:64] FLAG: --enable-load-reader="false" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653590 7387 flags.go:64] FLAG: --enable-server="true" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653594 7387 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653599 7387 flags.go:64] FLAG: --event-burst="100" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653604 7387 flags.go:64] FLAG: --event-qps="50" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653608 7387 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653612 7387 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 08 03:11:03.655884 master-0 kubenswrapper[7387]: I0308 03:11:03.653616 7387 flags.go:64] FLAG: --eviction-hard="" Mar 08 03:11:03.657419 master-0 kubenswrapper[7387]: I0308 03:11:03.653621 7387 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 08 03:11:03.657419 master-0 kubenswrapper[7387]: I0308 03:11:03.653625 7387 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 08 03:11:03.657419 master-0 kubenswrapper[7387]: I0308 03:11:03.653629 7387 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 08 03:11:03.657419 master-0 kubenswrapper[7387]: I0308 03:11:03.653634 7387 flags.go:64] FLAG: --eviction-soft="" Mar 08 03:11:03.657419 master-0 kubenswrapper[7387]: I0308 03:11:03.653638 7387 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 08 03:11:03.657419 master-0 kubenswrapper[7387]: I0308 03:11:03.653642 7387 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 08 03:11:03.657419 master-0 kubenswrapper[7387]: I0308 03:11:03.653646 7387 flags.go:64] FLAG: 
--experimental-allocatable-ignore-eviction="false" Mar 08 03:11:03.657419 master-0 kubenswrapper[7387]: I0308 03:11:03.653650 7387 flags.go:64] FLAG: --experimental-mounter-path="" Mar 08 03:11:03.657419 master-0 kubenswrapper[7387]: I0308 03:11:03.653654 7387 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 08 03:11:03.657419 master-0 kubenswrapper[7387]: I0308 03:11:03.653658 7387 flags.go:64] FLAG: --fail-swap-on="true" Mar 08 03:11:03.657419 master-0 kubenswrapper[7387]: I0308 03:11:03.653662 7387 flags.go:64] FLAG: --feature-gates="" Mar 08 03:11:03.657419 master-0 kubenswrapper[7387]: I0308 03:11:03.653668 7387 flags.go:64] FLAG: --file-check-frequency="20s" Mar 08 03:11:03.657419 master-0 kubenswrapper[7387]: I0308 03:11:03.653672 7387 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 08 03:11:03.657419 master-0 kubenswrapper[7387]: I0308 03:11:03.653677 7387 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 08 03:11:03.657419 master-0 kubenswrapper[7387]: I0308 03:11:03.653681 7387 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 08 03:11:03.657419 master-0 kubenswrapper[7387]: I0308 03:11:03.653685 7387 flags.go:64] FLAG: --healthz-port="10248" Mar 08 03:11:03.657419 master-0 kubenswrapper[7387]: I0308 03:11:03.653689 7387 flags.go:64] FLAG: --help="false" Mar 08 03:11:03.657419 master-0 kubenswrapper[7387]: I0308 03:11:03.653693 7387 flags.go:64] FLAG: --hostname-override="" Mar 08 03:11:03.657419 master-0 kubenswrapper[7387]: I0308 03:11:03.653702 7387 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 08 03:11:03.657419 master-0 kubenswrapper[7387]: I0308 03:11:03.653707 7387 flags.go:64] FLAG: --http-check-frequency="20s" Mar 08 03:11:03.657419 master-0 kubenswrapper[7387]: I0308 03:11:03.653711 7387 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 08 03:11:03.657419 master-0 kubenswrapper[7387]: I0308 03:11:03.653715 7387 flags.go:64] FLAG: --image-credential-provider-config="" Mar 08 03:11:03.657419 master-0 
kubenswrapper[7387]: I0308 03:11:03.653719 7387 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 08 03:11:03.657419 master-0 kubenswrapper[7387]: I0308 03:11:03.653723 7387 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 08 03:11:03.657419 master-0 kubenswrapper[7387]: I0308 03:11:03.653727 7387 flags.go:64] FLAG: --image-service-endpoint="" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653731 7387 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653735 7387 flags.go:64] FLAG: --kube-api-burst="100" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653739 7387 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653744 7387 flags.go:64] FLAG: --kube-api-qps="50" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653748 7387 flags.go:64] FLAG: --kube-reserved="" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653752 7387 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653756 7387 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653760 7387 flags.go:64] FLAG: --kubelet-cgroups="" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653764 7387 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653767 7387 flags.go:64] FLAG: --lock-file="" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653772 7387 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653776 7387 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653780 7387 flags.go:64] 
FLAG: --log-json-info-buffer-size="0" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653786 7387 flags.go:64] FLAG: --log-json-split-stream="false" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653790 7387 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653795 7387 flags.go:64] FLAG: --log-text-split-stream="false" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653799 7387 flags.go:64] FLAG: --logging-format="text" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653803 7387 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653807 7387 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653811 7387 flags.go:64] FLAG: --manifest-url="" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653815 7387 flags.go:64] FLAG: --manifest-url-header="" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653821 7387 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653825 7387 flags.go:64] FLAG: --max-open-files="1000000" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653830 7387 flags.go:64] FLAG: --max-pods="110" Mar 08 03:11:03.658127 master-0 kubenswrapper[7387]: I0308 03:11:03.653834 7387 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 08 03:11:03.658817 master-0 kubenswrapper[7387]: I0308 03:11:03.653840 7387 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 08 03:11:03.658817 master-0 kubenswrapper[7387]: I0308 03:11:03.653844 7387 flags.go:64] FLAG: --memory-manager-policy="None" Mar 08 03:11:03.658817 master-0 kubenswrapper[7387]: I0308 03:11:03.653848 7387 flags.go:64] FLAG: 
--minimum-container-ttl-duration="6m0s" Mar 08 03:11:03.658817 master-0 kubenswrapper[7387]: I0308 03:11:03.653852 7387 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 08 03:11:03.658817 master-0 kubenswrapper[7387]: I0308 03:11:03.653857 7387 flags.go:64] FLAG: --node-ip="192.168.32.10" Mar 08 03:11:03.658817 master-0 kubenswrapper[7387]: I0308 03:11:03.653860 7387 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 08 03:11:03.658817 master-0 kubenswrapper[7387]: I0308 03:11:03.653870 7387 flags.go:64] FLAG: --node-status-max-images="50" Mar 08 03:11:03.658817 master-0 kubenswrapper[7387]: I0308 03:11:03.653874 7387 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 08 03:11:03.658817 master-0 kubenswrapper[7387]: I0308 03:11:03.653878 7387 flags.go:64] FLAG: --oom-score-adj="-999" Mar 08 03:11:03.658817 master-0 kubenswrapper[7387]: I0308 03:11:03.653882 7387 flags.go:64] FLAG: --pod-cidr="" Mar 08 03:11:03.658817 master-0 kubenswrapper[7387]: I0308 03:11:03.653887 7387 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3" Mar 08 03:11:03.658817 master-0 kubenswrapper[7387]: I0308 03:11:03.653894 7387 flags.go:64] FLAG: --pod-manifest-path="" Mar 08 03:11:03.658817 master-0 kubenswrapper[7387]: I0308 03:11:03.653898 7387 flags.go:64] FLAG: --pod-max-pids="-1" Mar 08 03:11:03.658817 master-0 kubenswrapper[7387]: I0308 03:11:03.653919 7387 flags.go:64] FLAG: --pods-per-core="0" Mar 08 03:11:03.658817 master-0 kubenswrapper[7387]: I0308 03:11:03.653923 7387 flags.go:64] FLAG: --port="10250" Mar 08 03:11:03.658817 master-0 kubenswrapper[7387]: I0308 03:11:03.653927 7387 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 08 03:11:03.658817 master-0 kubenswrapper[7387]: I0308 03:11:03.653932 7387 flags.go:64] FLAG: --provider-id="" Mar 08 
03:11:03.658817 master-0 kubenswrapper[7387]: I0308 03:11:03.653936 7387 flags.go:64] FLAG: --qos-reserved="" Mar 08 03:11:03.658817 master-0 kubenswrapper[7387]: I0308 03:11:03.653940 7387 flags.go:64] FLAG: --read-only-port="10255" Mar 08 03:11:03.658817 master-0 kubenswrapper[7387]: I0308 03:11:03.653945 7387 flags.go:64] FLAG: --register-node="true" Mar 08 03:11:03.658817 master-0 kubenswrapper[7387]: I0308 03:11:03.653949 7387 flags.go:64] FLAG: --register-schedulable="true" Mar 08 03:11:03.658817 master-0 kubenswrapper[7387]: I0308 03:11:03.653953 7387 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 08 03:11:03.658817 master-0 kubenswrapper[7387]: I0308 03:11:03.653961 7387 flags.go:64] FLAG: --registry-burst="10" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.653965 7387 flags.go:64] FLAG: --registry-qps="5" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.653969 7387 flags.go:64] FLAG: --reserved-cpus="" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.653973 7387 flags.go:64] FLAG: --reserved-memory="" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.653978 7387 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.653983 7387 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.653987 7387 flags.go:64] FLAG: --rotate-certificates="false" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.653992 7387 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.653996 7387 flags.go:64] FLAG: --runonce="false" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.654001 7387 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.654005 7387 
flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.654011 7387 flags.go:64] FLAG: --seccomp-default="false" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.654015 7387 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.654019 7387 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.654024 7387 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.654028 7387 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.654032 7387 flags.go:64] FLAG: --storage-driver-password="root" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.654036 7387 flags.go:64] FLAG: --storage-driver-secure="false" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.654040 7387 flags.go:64] FLAG: --storage-driver-table="stats" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.654044 7387 flags.go:64] FLAG: --storage-driver-user="root" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.654049 7387 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.654053 7387 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.654058 7387 flags.go:64] FLAG: --system-cgroups="" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.654062 7387 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.654068 7387 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 08 03:11:03.659476 master-0 kubenswrapper[7387]: I0308 03:11:03.654072 7387 
flags.go:64] FLAG: --tls-cert-file="" Mar 08 03:11:03.660174 master-0 kubenswrapper[7387]: I0308 03:11:03.654076 7387 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 08 03:11:03.660174 master-0 kubenswrapper[7387]: I0308 03:11:03.654081 7387 flags.go:64] FLAG: --tls-min-version="" Mar 08 03:11:03.660174 master-0 kubenswrapper[7387]: I0308 03:11:03.654085 7387 flags.go:64] FLAG: --tls-private-key-file="" Mar 08 03:11:03.660174 master-0 kubenswrapper[7387]: I0308 03:11:03.654089 7387 flags.go:64] FLAG: --topology-manager-policy="none" Mar 08 03:11:03.660174 master-0 kubenswrapper[7387]: I0308 03:11:03.654093 7387 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 08 03:11:03.660174 master-0 kubenswrapper[7387]: I0308 03:11:03.654097 7387 flags.go:64] FLAG: --topology-manager-scope="container" Mar 08 03:11:03.660174 master-0 kubenswrapper[7387]: I0308 03:11:03.654102 7387 flags.go:64] FLAG: --v="2" Mar 08 03:11:03.660174 master-0 kubenswrapper[7387]: I0308 03:11:03.654107 7387 flags.go:64] FLAG: --version="false" Mar 08 03:11:03.660174 master-0 kubenswrapper[7387]: I0308 03:11:03.654112 7387 flags.go:64] FLAG: --vmodule="" Mar 08 03:11:03.660174 master-0 kubenswrapper[7387]: I0308 03:11:03.654117 7387 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 08 03:11:03.660174 master-0 kubenswrapper[7387]: I0308 03:11:03.654121 7387 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 08 03:11:03.660174 master-0 kubenswrapper[7387]: W0308 03:11:03.654255 7387 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 08 03:11:03.660174 master-0 kubenswrapper[7387]: W0308 03:11:03.654260 7387 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 08 03:11:03.660174 master-0 kubenswrapper[7387]: W0308 03:11:03.654265 7387 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 08 03:11:03.660174 master-0 kubenswrapper[7387]: W0308 03:11:03.654268 7387 feature_gate.go:330] unrecognized 
feature gate: VSphereDriverConfiguration Mar 08 03:11:03.660174 master-0 kubenswrapper[7387]: W0308 03:11:03.654272 7387 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 08 03:11:03.660174 master-0 kubenswrapper[7387]: W0308 03:11:03.654275 7387 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 08 03:11:03.660174 master-0 kubenswrapper[7387]: W0308 03:11:03.654279 7387 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 08 03:11:03.660174 master-0 kubenswrapper[7387]: W0308 03:11:03.654284 7387 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 08 03:11:03.660174 master-0 kubenswrapper[7387]: W0308 03:11:03.654289 7387 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 08 03:11:03.660174 master-0 kubenswrapper[7387]: W0308 03:11:03.654294 7387 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 08 03:11:03.660174 master-0 kubenswrapper[7387]: W0308 03:11:03.654298 7387 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 08 03:11:03.660794 master-0 kubenswrapper[7387]: W0308 03:11:03.654302 7387 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 08 03:11:03.660794 master-0 kubenswrapper[7387]: W0308 03:11:03.654306 7387 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 08 03:11:03.660794 master-0 kubenswrapper[7387]: W0308 03:11:03.654310 7387 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 08 03:11:03.660794 master-0 kubenswrapper[7387]: W0308 03:11:03.654313 7387 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 08 03:11:03.660794 master-0 kubenswrapper[7387]: W0308 03:11:03.654317 7387 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 08 03:11:03.660794 master-0 kubenswrapper[7387]: W0308 
03:11:03.654320 7387 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 08 03:11:03.660794 master-0 kubenswrapper[7387]: W0308 03:11:03.654324 7387 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 08 03:11:03.660794 master-0 kubenswrapper[7387]: W0308 03:11:03.654328 7387 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 08 03:11:03.660794 master-0 kubenswrapper[7387]: W0308 03:11:03.654334 7387 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 08 03:11:03.660794 master-0 kubenswrapper[7387]: W0308 03:11:03.654337 7387 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 08 03:11:03.660794 master-0 kubenswrapper[7387]: W0308 03:11:03.654341 7387 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 08 03:11:03.660794 master-0 kubenswrapper[7387]: W0308 03:11:03.654344 7387 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 08 03:11:03.660794 master-0 kubenswrapper[7387]: W0308 03:11:03.654348 7387 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 08 03:11:03.660794 master-0 kubenswrapper[7387]: W0308 03:11:03.654351 7387 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 08 03:11:03.660794 master-0 kubenswrapper[7387]: W0308 03:11:03.654355 7387 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 08 03:11:03.660794 master-0 kubenswrapper[7387]: W0308 03:11:03.654359 7387 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 08 03:11:03.660794 master-0 kubenswrapper[7387]: W0308 03:11:03.654362 7387 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 08 03:11:03.660794 master-0 kubenswrapper[7387]: W0308 03:11:03.654366 7387 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 08 03:11:03.660794 master-0 kubenswrapper[7387]: W0308 03:11:03.654369 7387 feature_gate.go:330] unrecognized feature gate: 
CSIDriverSharedResource Mar 08 03:11:03.660794 master-0 kubenswrapper[7387]: W0308 03:11:03.654373 7387 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 08 03:11:03.660794 master-0 kubenswrapper[7387]: W0308 03:11:03.654376 7387 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 08 03:11:03.661446 master-0 kubenswrapper[7387]: W0308 03:11:03.654380 7387 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 08 03:11:03.661446 master-0 kubenswrapper[7387]: W0308 03:11:03.654383 7387 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 08 03:11:03.661446 master-0 kubenswrapper[7387]: W0308 03:11:03.654387 7387 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 08 03:11:03.661446 master-0 kubenswrapper[7387]: W0308 03:11:03.654392 7387 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 08 03:11:03.661446 master-0 kubenswrapper[7387]: W0308 03:11:03.654396 7387 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 08 03:11:03.661446 master-0 kubenswrapper[7387]: W0308 03:11:03.654400 7387 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 08 03:11:03.661446 master-0 kubenswrapper[7387]: W0308 03:11:03.654404 7387 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 08 03:11:03.661446 master-0 kubenswrapper[7387]: W0308 03:11:03.654408 7387 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 08 03:11:03.661446 master-0 kubenswrapper[7387]: W0308 03:11:03.654413 7387 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 08 03:11:03.661446 master-0 kubenswrapper[7387]: W0308 03:11:03.654418 7387 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 08 03:11:03.661446 master-0 kubenswrapper[7387]: W0308 03:11:03.654423 7387 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 
08 03:11:03.661446 master-0 kubenswrapper[7387]: W0308 03:11:03.654427 7387 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 08 03:11:03.661446 master-0 kubenswrapper[7387]: W0308 03:11:03.654432 7387 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 08 03:11:03.661446 master-0 kubenswrapper[7387]: W0308 03:11:03.654437 7387 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 08 03:11:03.661446 master-0 kubenswrapper[7387]: W0308 03:11:03.654442 7387 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 08 03:11:03.661446 master-0 kubenswrapper[7387]: W0308 03:11:03.654446 7387 feature_gate.go:330] unrecognized feature gate: Example Mar 08 03:11:03.661446 master-0 kubenswrapper[7387]: W0308 03:11:03.654450 7387 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 08 03:11:03.661446 master-0 kubenswrapper[7387]: W0308 03:11:03.654454 7387 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 08 03:11:03.661446 master-0 kubenswrapper[7387]: W0308 03:11:03.654459 7387 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 08 03:11:03.662061 master-0 kubenswrapper[7387]: W0308 03:11:03.654466 7387 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 08 03:11:03.662061 master-0 kubenswrapper[7387]: W0308 03:11:03.654470 7387 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 08 03:11:03.662061 master-0 kubenswrapper[7387]: W0308 03:11:03.654474 7387 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 08 03:11:03.662061 master-0 kubenswrapper[7387]: W0308 03:11:03.654477 7387 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 08 03:11:03.662061 master-0 kubenswrapper[7387]: W0308 03:11:03.654484 7387 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 08 03:11:03.662061 master-0 kubenswrapper[7387]: W0308 03:11:03.654488 7387 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 08 03:11:03.662061 master-0 kubenswrapper[7387]: W0308 03:11:03.654492 7387 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 08 03:11:03.662061 master-0 kubenswrapper[7387]: W0308 03:11:03.654495 7387 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 08 03:11:03.662061 master-0 kubenswrapper[7387]: W0308 03:11:03.654499 7387 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 08 03:11:03.662061 master-0 kubenswrapper[7387]: W0308 03:11:03.654502 7387 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 08 03:11:03.662061 master-0 kubenswrapper[7387]: W0308 03:11:03.654506 7387 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 08 03:11:03.662061 master-0 kubenswrapper[7387]: W0308 03:11:03.654509 7387 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 08 03:11:03.662061 master-0 kubenswrapper[7387]: W0308 03:11:03.654513 7387 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 08 03:11:03.662061 master-0 kubenswrapper[7387]: W0308 03:11:03.654516 7387 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 08 03:11:03.662061 master-0 kubenswrapper[7387]: W0308 03:11:03.654519 7387 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 08 03:11:03.662061 master-0 kubenswrapper[7387]: W0308 03:11:03.654524 7387 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 08 03:11:03.662061 master-0 kubenswrapper[7387]: W0308 03:11:03.654527 7387 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 08 03:11:03.662061 master-0 kubenswrapper[7387]: W0308 03:11:03.654531 7387 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 08 03:11:03.662061 master-0 kubenswrapper[7387]: W0308 03:11:03.654534 7387 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 08 03:11:03.662061 master-0 kubenswrapper[7387]: W0308 03:11:03.654538 7387 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 08 03:11:03.662479 master-0 kubenswrapper[7387]: W0308 03:11:03.654542 7387 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 08 03:11:03.662479 master-0 kubenswrapper[7387]: I0308 03:11:03.654553 7387 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 08 03:11:03.665228 master-0 kubenswrapper[7387]: I0308 03:11:03.665188 7387 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 08 03:11:03.665228 master-0 kubenswrapper[7387]: I0308 03:11:03.665219 7387 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 08 03:11:03.665313 master-0 kubenswrapper[7387]: W0308 03:11:03.665291 7387 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 08 03:11:03.665313 master-0 kubenswrapper[7387]: W0308 03:11:03.665298 7387 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 08 03:11:03.665313 master-0 kubenswrapper[7387]: W0308 03:11:03.665302 7387 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 08 03:11:03.665313 master-0 kubenswrapper[7387]: W0308 03:11:03.665307 7387 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 08 03:11:03.665313 master-0 kubenswrapper[7387]: W0308 03:11:03.665310 7387 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 08 03:11:03.665313 master-0 kubenswrapper[7387]: W0308 03:11:03.665314 7387 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 08 03:11:03.665444 master-0 kubenswrapper[7387]: W0308 03:11:03.665318 7387 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 08 03:11:03.665444 master-0 kubenswrapper[7387]: W0308 03:11:03.665322 7387 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 08 03:11:03.665444 master-0 kubenswrapper[7387]: W0308 03:11:03.665326 7387 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 08 03:11:03.665444 master-0 kubenswrapper[7387]: W0308 03:11:03.665330 7387 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 08 03:11:03.665444 master-0 kubenswrapper[7387]: W0308 03:11:03.665333 7387 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 08 03:11:03.665444 master-0 kubenswrapper[7387]: W0308 03:11:03.665338 7387 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 08 03:11:03.665444 master-0 kubenswrapper[7387]: W0308 03:11:03.665342 7387 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 08 03:11:03.665444 master-0 kubenswrapper[7387]: W0308 03:11:03.665345 7387 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 08 03:11:03.665444 master-0 kubenswrapper[7387]: W0308 03:11:03.665350 7387 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 08 03:11:03.665444 master-0 kubenswrapper[7387]: W0308 03:11:03.665356 7387 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 08 03:11:03.665444 master-0 kubenswrapper[7387]: W0308 03:11:03.665360 7387 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 08 03:11:03.665444 master-0 kubenswrapper[7387]: W0308 03:11:03.665364 7387 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 08 03:11:03.665444 master-0 kubenswrapper[7387]: W0308 03:11:03.665368 7387 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 08 03:11:03.665444 master-0 kubenswrapper[7387]: W0308 03:11:03.665372 7387 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 08 03:11:03.665444 master-0 kubenswrapper[7387]: W0308 03:11:03.665376 7387 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 08 03:11:03.665444 master-0 kubenswrapper[7387]: W0308 03:11:03.665380 7387 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 08 03:11:03.665444 master-0 kubenswrapper[7387]: W0308 03:11:03.665383 7387 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 08 03:11:03.665444 master-0 kubenswrapper[7387]: W0308 03:11:03.665387 7387 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 08 03:11:03.665444 master-0 kubenswrapper[7387]: W0308 03:11:03.665390 7387 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 08 03:11:03.665911 master-0 kubenswrapper[7387]: W0308 03:11:03.665394 7387 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 08 03:11:03.665911 master-0 kubenswrapper[7387]: W0308 03:11:03.665397 7387 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 08 03:11:03.665911 master-0 kubenswrapper[7387]: W0308 03:11:03.665402 7387 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 08 03:11:03.665911 master-0 kubenswrapper[7387]: W0308 03:11:03.665406 7387 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 08 03:11:03.665911 master-0 kubenswrapper[7387]: W0308 03:11:03.665410 7387 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 08 03:11:03.665911 master-0 kubenswrapper[7387]: W0308 03:11:03.665414 7387 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 08 03:11:03.665911 master-0 kubenswrapper[7387]: W0308 03:11:03.665418 7387 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 08 03:11:03.665911 master-0 kubenswrapper[7387]: W0308 03:11:03.665421 7387 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 08 03:11:03.665911 master-0 kubenswrapper[7387]: W0308 03:11:03.665425 7387 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 08 03:11:03.665911 master-0 kubenswrapper[7387]: W0308 03:11:03.665428 7387 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 08 03:11:03.665911 master-0 kubenswrapper[7387]: W0308 03:11:03.665433 7387 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 08 03:11:03.665911 master-0 kubenswrapper[7387]: W0308 03:11:03.665436 7387 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 08 03:11:03.665911 master-0 kubenswrapper[7387]: W0308 03:11:03.665440 7387 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 08 03:11:03.665911 master-0 kubenswrapper[7387]: W0308 03:11:03.665443 7387 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 08 03:11:03.665911 master-0 kubenswrapper[7387]: W0308 03:11:03.665448 7387 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 08 03:11:03.665911 master-0 kubenswrapper[7387]: W0308 03:11:03.665452 7387 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 08 03:11:03.665911 master-0 kubenswrapper[7387]: W0308 03:11:03.665455 7387 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 08 03:11:03.665911 master-0 kubenswrapper[7387]: W0308 03:11:03.665459 7387 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 08 03:11:03.665911 master-0 kubenswrapper[7387]: W0308 03:11:03.665462 7387 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 08 03:11:03.665911 master-0 kubenswrapper[7387]: W0308 03:11:03.665466 7387 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 08 03:11:03.666338 master-0 kubenswrapper[7387]: W0308 03:11:03.665470 7387 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 08 03:11:03.666338 master-0 kubenswrapper[7387]: W0308 03:11:03.665473 7387 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 08 03:11:03.666338 master-0 kubenswrapper[7387]: W0308 03:11:03.665477 7387 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 08 03:11:03.666338 master-0 kubenswrapper[7387]: W0308 03:11:03.665481 7387 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 08 03:11:03.666338 master-0 kubenswrapper[7387]: W0308 03:11:03.665484 7387 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 08 03:11:03.666338 master-0 kubenswrapper[7387]: W0308 03:11:03.665488 7387 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 08 03:11:03.666338 master-0 kubenswrapper[7387]: W0308 03:11:03.665491 7387 feature_gate.go:330] unrecognized feature gate: Example
Mar 08 03:11:03.666338 master-0 kubenswrapper[7387]: W0308 03:11:03.665494 7387 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 08 03:11:03.666338 master-0 kubenswrapper[7387]: W0308 03:11:03.665498 7387 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 08 03:11:03.666338 master-0 kubenswrapper[7387]: W0308 03:11:03.665501 7387 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 08 03:11:03.666338 master-0 kubenswrapper[7387]: W0308 03:11:03.665505 7387 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 08 03:11:03.666338 master-0 kubenswrapper[7387]: W0308 03:11:03.665508 7387 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 08 03:11:03.666338 master-0 kubenswrapper[7387]: W0308 03:11:03.665512 7387 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 08 03:11:03.666338 master-0 kubenswrapper[7387]: W0308 03:11:03.665515 7387 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 08 03:11:03.666338 master-0 kubenswrapper[7387]: W0308 03:11:03.665519 7387 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 08 03:11:03.666338 master-0 kubenswrapper[7387]: W0308 03:11:03.665523 7387 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 08 03:11:03.666338 master-0 kubenswrapper[7387]: W0308 03:11:03.665529 7387 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 08 03:11:03.666338 master-0 kubenswrapper[7387]: W0308 03:11:03.665534 7387 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 08 03:11:03.666338 master-0 kubenswrapper[7387]: W0308 03:11:03.665537 7387 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 08 03:11:03.666338 master-0 kubenswrapper[7387]: W0308 03:11:03.665541 7387 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 08 03:11:03.666858 master-0 kubenswrapper[7387]: W0308 03:11:03.665545 7387 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 08 03:11:03.666858 master-0 kubenswrapper[7387]: W0308 03:11:03.665549 7387 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 08 03:11:03.666858 master-0 kubenswrapper[7387]: W0308 03:11:03.665553 7387 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 08 03:11:03.666858 master-0 kubenswrapper[7387]: W0308 03:11:03.665556 7387 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 08 03:11:03.666858 master-0 kubenswrapper[7387]: W0308 03:11:03.665560 7387 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 08 03:11:03.666858 master-0 kubenswrapper[7387]: W0308 03:11:03.665564 7387 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 08 03:11:03.666858 master-0 kubenswrapper[7387]: W0308 03:11:03.665570 7387 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 08 03:11:03.666858 master-0 kubenswrapper[7387]: I0308 03:11:03.665577 7387 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 08 03:11:03.666858 master-0 kubenswrapper[7387]: W0308 03:11:03.665691 7387 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 08 03:11:03.666858 master-0 kubenswrapper[7387]: W0308 03:11:03.665698 7387 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 08 03:11:03.666858 master-0 kubenswrapper[7387]: W0308 03:11:03.665702 7387 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 08 03:11:03.666858 master-0 kubenswrapper[7387]: W0308 03:11:03.665706 7387 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 08 03:11:03.666858 master-0 kubenswrapper[7387]: W0308 03:11:03.665712 7387 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 08 03:11:03.666858 master-0 kubenswrapper[7387]: W0308 03:11:03.665717 7387 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 08 03:11:03.667243 master-0 kubenswrapper[7387]: W0308 03:11:03.665722 7387 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 08 03:11:03.667243 master-0 kubenswrapper[7387]: W0308 03:11:03.665725 7387 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 08 03:11:03.667243 master-0 kubenswrapper[7387]: W0308 03:11:03.665729 7387 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 08 03:11:03.667243 master-0 kubenswrapper[7387]: W0308 03:11:03.665733 7387 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 08 03:11:03.667243 master-0 kubenswrapper[7387]: W0308 03:11:03.665736 7387 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 08 03:11:03.667243 master-0 kubenswrapper[7387]: W0308 03:11:03.665740 7387 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 08 03:11:03.667243 master-0 kubenswrapper[7387]: W0308 03:11:03.665744 7387 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 08 03:11:03.667243 master-0 kubenswrapper[7387]: W0308 03:11:03.665747 7387 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 08 03:11:03.667243 master-0 kubenswrapper[7387]: W0308 03:11:03.665751 7387 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 08 03:11:03.667243 master-0 kubenswrapper[7387]: W0308 03:11:03.665755 7387 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 08 03:11:03.667243 master-0 kubenswrapper[7387]: W0308 03:11:03.665758 7387 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 08 03:11:03.667243 master-0 kubenswrapper[7387]: W0308 03:11:03.665762 7387 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 08 03:11:03.667243 master-0 kubenswrapper[7387]: W0308 03:11:03.665765 7387 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 08 03:11:03.667243 master-0 kubenswrapper[7387]: W0308 03:11:03.665769 7387 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 08 03:11:03.667243 master-0 kubenswrapper[7387]: W0308 03:11:03.665772 7387 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 08 03:11:03.667243 master-0 kubenswrapper[7387]: W0308 03:11:03.665776 7387 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 08 03:11:03.667243 master-0 kubenswrapper[7387]: W0308 03:11:03.665779 7387 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 08 03:11:03.667243 master-0 kubenswrapper[7387]: W0308 03:11:03.665783 7387 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 08 03:11:03.667243 master-0 kubenswrapper[7387]: W0308 03:11:03.665786 7387 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 08 03:11:03.667243 master-0 kubenswrapper[7387]: W0308 03:11:03.665790 7387 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 08 03:11:03.667659 master-0 kubenswrapper[7387]: W0308 03:11:03.665793 7387 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 08 03:11:03.667659 master-0 kubenswrapper[7387]: W0308 03:11:03.665797 7387 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 08 03:11:03.667659 master-0 kubenswrapper[7387]: W0308 03:11:03.665800 7387 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 08 03:11:03.667659 master-0 kubenswrapper[7387]: W0308 03:11:03.665804 7387 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 08 03:11:03.667659 master-0 kubenswrapper[7387]: W0308 03:11:03.665810 7387 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 08 03:11:03.667659 master-0 kubenswrapper[7387]: W0308 03:11:03.665814 7387 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 08 03:11:03.667659 master-0 kubenswrapper[7387]: W0308 03:11:03.665818 7387 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 08 03:11:03.667659 master-0 kubenswrapper[7387]: W0308 03:11:03.665822 7387 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 08 03:11:03.667659 master-0 kubenswrapper[7387]: W0308 03:11:03.665826 7387 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 08 03:11:03.667659 master-0 kubenswrapper[7387]: W0308 03:11:03.665830 7387 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 08 03:11:03.667659 master-0 kubenswrapper[7387]: W0308 03:11:03.665833 7387 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 08 03:11:03.667659 master-0 kubenswrapper[7387]: W0308 03:11:03.665836 7387 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 08 03:11:03.667659 master-0 kubenswrapper[7387]: W0308 03:11:03.665841 7387 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 08 03:11:03.667659 master-0 kubenswrapper[7387]: W0308 03:11:03.665844 7387 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 08 03:11:03.667659 master-0 kubenswrapper[7387]: W0308 03:11:03.665848 7387 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 08 03:11:03.667659 master-0 kubenswrapper[7387]: W0308 03:11:03.665851 7387 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 08 03:11:03.667659 master-0 kubenswrapper[7387]: W0308 03:11:03.665855 7387 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 08 03:11:03.667659 master-0 kubenswrapper[7387]: W0308 03:11:03.665858 7387 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 08 03:11:03.667659 master-0 kubenswrapper[7387]: W0308 03:11:03.665861 7387 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 08 03:11:03.668064 master-0 kubenswrapper[7387]: W0308 03:11:03.665865 7387 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 08 03:11:03.668064 master-0 kubenswrapper[7387]: W0308 03:11:03.665869 7387 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 08 03:11:03.668064 master-0 kubenswrapper[7387]: W0308 03:11:03.665872 7387 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 08 03:11:03.668064 master-0 kubenswrapper[7387]: W0308 03:11:03.665876 7387 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 08 03:11:03.668064 master-0 kubenswrapper[7387]: W0308 03:11:03.665879 7387 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 08 03:11:03.668064 master-0 kubenswrapper[7387]: W0308 03:11:03.665883 7387 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 08 03:11:03.668064 master-0 kubenswrapper[7387]: W0308 03:11:03.665887 7387 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 08 03:11:03.668064 master-0 kubenswrapper[7387]: W0308 03:11:03.665890 7387 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 08 03:11:03.668064 master-0 kubenswrapper[7387]: W0308 03:11:03.665894 7387 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 08 03:11:03.668064 master-0 kubenswrapper[7387]: W0308 03:11:03.665900 7387 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 08 03:11:03.668064 master-0 kubenswrapper[7387]: W0308 03:11:03.665924 7387 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 08 03:11:03.668064 master-0 kubenswrapper[7387]: W0308 03:11:03.665929 7387 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 08 03:11:03.668064 master-0 kubenswrapper[7387]: W0308 03:11:03.665935 7387 feature_gate.go:330] unrecognized feature gate: Example
Mar 08 03:11:03.668064 master-0 kubenswrapper[7387]: W0308 03:11:03.665939 7387 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 08 03:11:03.668064 master-0 kubenswrapper[7387]: W0308 03:11:03.665944 7387 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 08 03:11:03.668064 master-0 kubenswrapper[7387]: W0308 03:11:03.665950 7387 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 08 03:11:03.668064 master-0 kubenswrapper[7387]: W0308 03:11:03.665954 7387 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 08 03:11:03.668064 master-0 kubenswrapper[7387]: W0308 03:11:03.665959 7387 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 08 03:11:03.668064 master-0 kubenswrapper[7387]: W0308 03:11:03.665963 7387 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 08 03:11:03.668064 master-0 kubenswrapper[7387]: W0308 03:11:03.665967 7387 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 08 03:11:03.668497 master-0 kubenswrapper[7387]: W0308 03:11:03.665971 7387 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 08 03:11:03.668497 master-0 kubenswrapper[7387]: W0308 03:11:03.665975 7387 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 08 03:11:03.668497 master-0 kubenswrapper[7387]: W0308 03:11:03.665979 7387 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 08 03:11:03.668497 master-0 kubenswrapper[7387]: W0308 03:11:03.665982 7387 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 08 03:11:03.668497 master-0 kubenswrapper[7387]: W0308 03:11:03.665986 7387 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 08 03:11:03.668497 master-0 kubenswrapper[7387]: W0308 03:11:03.665989 7387 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 08 03:11:03.668497 master-0 kubenswrapper[7387]: W0308 03:11:03.665993 7387 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 08 03:11:03.668497 master-0 kubenswrapper[7387]: I0308 03:11:03.665999 7387 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 08 03:11:03.668497 master-0 kubenswrapper[7387]: I0308 03:11:03.666154 7387 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 08 03:11:03.668497 master-0 kubenswrapper[7387]: I0308 03:11:03.667616 7387 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 08 03:11:03.668497 master-0 kubenswrapper[7387]: I0308 03:11:03.667684 7387 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 08 03:11:03.668497 master-0 kubenswrapper[7387]: I0308 03:11:03.667868 7387 server.go:997] "Starting client certificate rotation"
Mar 08 03:11:03.668497 master-0 kubenswrapper[7387]: I0308 03:11:03.667876 7387 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 08 03:11:03.668992 master-0 kubenswrapper[7387]: I0308 03:11:03.668114 7387 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-09 03:01:08 +0000 UTC, rotation deadline is 2026-03-08 23:38:24.109782435 +0000 UTC
Mar 08 03:11:03.668992 master-0 kubenswrapper[7387]: I0308 03:11:03.668183 7387 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 20h27m20.441601905s for next certificate rotation
Mar 08 03:11:03.668992 master-0 kubenswrapper[7387]: I0308 03:11:03.668415 7387 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 08 03:11:03.669706 master-0 kubenswrapper[7387]: I0308 03:11:03.669679 7387 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 08 03:11:03.674113 master-0 kubenswrapper[7387]: I0308 03:11:03.672346 7387 log.go:25] "Validated CRI v1 runtime API"
Mar 08 03:11:03.674915 master-0 kubenswrapper[7387]: I0308 03:11:03.674842 7387 log.go:25] "Validated CRI v1 image API"
Mar 08 03:11:03.677255 master-0 kubenswrapper[7387]: I0308 03:11:03.677116 7387 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 08 03:11:03.682730 master-0 kubenswrapper[7387]: I0308 03:11:03.682682 7387 fs.go:135] Filesystem UUIDs: map[0b52d2da-0de4-4c5d-93b4-a42985f64420:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Mar 08 03:11:03.682966 master-0 kubenswrapper[7387]: I0308 03:11:03.682713 7387 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/00d76aa6e00e12ac364afa83e5fd631d414e7872b31bf1feb62fc1d452ac8d6a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/00d76aa6e00e12ac364afa83e5fd631d414e7872b31bf1feb62fc1d452ac8d6a/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0a9eb19952ec20b1658c5d7279dba5a3e819952572f69b34c3995c362fd16f77/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0a9eb19952ec20b1658c5d7279dba5a3e819952572f69b34c3995c362fd16f77/userdata/shm major:0 minor:247 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0e61ec2701bfc25eb5be928b08cc38e792bd258a0029c05a51bd1e479e58f0e3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0e61ec2701bfc25eb5be928b08cc38e792bd258a0029c05a51bd1e479e58f0e3/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0}
/run/containers/storage/overlay-containers/1232aad5956093753d35685897e21ebb416211a87662dd6ecf51a5d3e9c0b32a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1232aad5956093753d35685897e21ebb416211a87662dd6ecf51a5d3e9c0b32a/userdata/shm major:0 minor:231 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/17b37add10475bc68eb15628021eecebb97b383f212ff9b1f6eec1b7b5ecb93d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/17b37add10475bc68eb15628021eecebb97b383f212ff9b1f6eec1b7b5ecb93d/userdata/shm major:0 minor:279 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2cfaca9fcdc537eb7c408c01daad733c4e6c46861c4477e533321e5ad366b94d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2cfaca9fcdc537eb7c408c01daad733c4e6c46861c4477e533321e5ad366b94d/userdata/shm major:0 minor:144 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/32cd08c82c3a9782e49f0aedb6e9aa5133016a2e1a1a498bd5a24df1a9fb1acd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/32cd08c82c3a9782e49f0aedb6e9aa5133016a2e1a1a498bd5a24df1a9fb1acd/userdata/shm major:0 minor:237 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/33abd37edec3b6673abf4565124ec1bb97dfb231042f8c1557bae037c9db586c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/33abd37edec3b6673abf4565124ec1bb97dfb231042f8c1557bae037c9db586c/userdata/shm major:0 minor:281 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3656e53b736cafa9b6c056ac5eca5807c9f3942f84ffbe91cd640949d983eff6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3656e53b736cafa9b6c056ac5eca5807c9f3942f84ffbe91cd640949d983eff6/userdata/shm major:0 minor:273 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/6c2ad8212c197eee7b469f1de5efa66984b471df3e1f03d54b6b5ff8745f2152/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6c2ad8212c197eee7b469f1de5efa66984b471df3e1f03d54b6b5ff8745f2152/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7318cd3451d32a71b4c756d7048c3d653bc133c447ae6a1c5c593d8efda4718a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7318cd3451d32a71b4c756d7048c3d653bc133c447ae6a1c5c593d8efda4718a/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/78bd83c51ec0b72f8c1c51a4e8cc4279f7e9fc2470a6586c4f8e968fc90dd9c1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/78bd83c51ec0b72f8c1c51a4e8cc4279f7e9fc2470a6586c4f8e968fc90dd9c1/userdata/shm major:0 minor:241 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/975b4d0b44381f65f95d81f848a4362b6807994f0beac99be40baae93513b5d6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/975b4d0b44381f65f95d81f848a4362b6807994f0beac99be40baae93513b5d6/userdata/shm major:0 minor:253 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9f1c6c0636a4899d7b1fba463483019132e2775ba2d317a272e9611e9eb04fdb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9f1c6c0636a4899d7b1fba463483019132e2775ba2d317a272e9611e9eb04fdb/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b47ec93978468330f5b6fd9911611a54c62310997396935ab30d9d7feb5533c5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b47ec93978468330f5b6fd9911611a54c62310997396935ab30d9d7feb5533c5/userdata/shm major:0 minor:228 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b835d8031dbcbc04b5cf9f5f9326f7df63aa6cc447918f61407dc7395da0cf96/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b835d8031dbcbc04b5cf9f5f9326f7df63aa6cc447918f61407dc7395da0cf96/userdata/shm major:0 minor:277 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bc9a73581d61f23b90565e1504479bb07c7036d273c39ea527a5c5e5f96ad318/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bc9a73581d61f23b90565e1504479bb07c7036d273c39ea527a5c5e5f96ad318/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c6d3624a26cf17ed6d9d863dbd0123f9d75c4ad1fd279b49f51b9d0ec0bcd2e7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c6d3624a26cf17ed6d9d863dbd0123f9d75c4ad1fd279b49f51b9d0ec0bcd2e7/userdata/shm major:0 minor:249 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d577cf22293cc3efccf6f8d7b5c5def3ac27aeb747212f6643892edfacc4bbc3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d577cf22293cc3efccf6f8d7b5c5def3ac27aeb747212f6643892edfacc4bbc3/userdata/shm major:0 minor:291 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/da13ebe4bb39b539d69ddd6f98c92aef7a368cb8e590b47b5129b0e84f51f727/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/da13ebe4bb39b539d69ddd6f98c92aef7a368cb8e590b47b5129b0e84f51f727/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dfc903a3a09201aa3b1c76a517a337916f356be7b6618a2128b1dc4f4785ac63/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dfc903a3a09201aa3b1c76a517a337916f356be7b6618a2128b1dc4f4785ac63/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/e0863a084dab5a5150480ef18603c4be97dcab69eda52c04e9d468c989d32511/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e0863a084dab5a5150480ef18603c4be97dcab69eda52c04e9d468c989d32511/userdata/shm major:0 minor:104 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e7ddc2cc17107ecc5f5679a895a40a2316543cd8ac3957bbb6fdbfd52f258bbd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e7ddc2cc17107ecc5f5679a895a40a2316543cd8ac3957bbb6fdbfd52f258bbd/userdata/shm major:0 minor:256 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f2057fa5db1def1b4beab4f6ad7ad5d375b26c00136a93b9850880221e4af077/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f2057fa5db1def1b4beab4f6ad7ad5d375b26c00136a93b9850880221e4af077/userdata/shm major:0 minor:100 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0722d9c3-77b8-4770-9171-d4aeba4b0cc7/volumes/kubernetes.io~projected/kube-api-access-vnvtg:{mountpoint:/var/lib/kubelet/pods/0722d9c3-77b8-4770-9171-d4aeba4b0cc7/volumes/kubernetes.io~projected/kube-api-access-vnvtg major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0722d9c3-77b8-4770-9171-d4aeba4b0cc7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0722d9c3-77b8-4770-9171-d4aeba4b0cc7/volumes/kubernetes.io~secret/serving-cert major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/103158c5-c99f-4224-bf5a-e23b1aaf9172/volumes/kubernetes.io~projected/kube-api-access-m5pgg:{mountpoint:/var/lib/kubelet/pods/103158c5-c99f-4224-bf5a-e23b1aaf9172/volumes/kubernetes.io~projected/kube-api-access-m5pgg major:0 minor:222 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/197afe92-5912-4e90-a477-e3abe001bbc7/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/197afe92-5912-4e90-a477-e3abe001bbc7/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/197afe92-5912-4e90-a477-e3abe001bbc7/volumes/kubernetes.io~projected/kube-api-access-2kd6j:{mountpoint:/var/lib/kubelet/pods/197afe92-5912-4e90-a477-e3abe001bbc7/volumes/kubernetes.io~projected/kube-api-access-2kd6j major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d446527-f3fd-4a37-a980-7445031928d1/volumes/kubernetes.io~projected/kube-api-access-2qvl4:{mountpoint:/var/lib/kubelet/pods/1d446527-f3fd-4a37-a980-7445031928d1/volumes/kubernetes.io~projected/kube-api-access-2qvl4 major:0 minor:272 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d446527-f3fd-4a37-a980-7445031928d1/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1d446527-f3fd-4a37-a980-7445031928d1/volumes/kubernetes.io~secret/serving-cert major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1f7c9726-057b-4c5c-8a03-9bc407dedb9b/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/1f7c9726-057b-4c5c-8a03-9bc407dedb9b/volumes/kubernetes.io~projected/kube-api-access major:0 minor:99 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1fa64f1b-9f10-488b-8f94-1600774062c4/volumes/kubernetes.io~projected/kube-api-access-8k2lp:{mountpoint:/var/lib/kubelet/pods/1fa64f1b-9f10-488b-8f94-1600774062c4/volumes/kubernetes.io~projected/kube-api-access-8k2lp major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1fa64f1b-9f10-488b-8f94-1600774062c4/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1fa64f1b-9f10-488b-8f94-1600774062c4/volumes/kubernetes.io~secret/serving-cert major:0 minor:224 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/2468d2a3-ec65-4888-a86a-3f66fa311f56/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/2468d2a3-ec65-4888-a86a-3f66fa311f56/volumes/kubernetes.io~projected/kube-api-access major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2468d2a3-ec65-4888-a86a-3f66fa311f56/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/2468d2a3-ec65-4888-a86a-3f66fa311f56/volumes/kubernetes.io~secret/serving-cert major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2a506cf6-bc39-4089-9caa-4c14c4d15c11/volumes/kubernetes.io~projected/kube-api-access-7flfl:{mountpoint:/var/lib/kubelet/pods/2a506cf6-bc39-4089-9caa-4c14c4d15c11/volumes/kubernetes.io~projected/kube-api-access-7flfl major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2a506cf6-bc39-4089-9caa-4c14c4d15c11/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/2a506cf6-bc39-4089-9caa-4c14c4d15c11/volumes/kubernetes.io~secret/serving-cert major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3d69f101-60a8-41fd-bcda-4eb654c626a2/volumes/kubernetes.io~projected/kube-api-access-8gnng:{mountpoint:/var/lib/kubelet/pods/3d69f101-60a8-41fd-bcda-4eb654c626a2/volumes/kubernetes.io~projected/kube-api-access-8gnng major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4711e21f-da6d-47ee-8722-64663e05de10/volumes/kubernetes.io~projected/kube-api-access-ms6s7:{mountpoint:/var/lib/kubelet/pods/4711e21f-da6d-47ee-8722-64663e05de10/volumes/kubernetes.io~projected/kube-api-access-ms6s7 major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4711e21f-da6d-47ee-8722-64663e05de10/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/4711e21f-da6d-47ee-8722-64663e05de10/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4fd323ae-11bf-4207-bdce-4d51a9c19dc3/volumes/kubernetes.io~projected/kube-api-access-2ct9j:{mountpoint:/var/lib/kubelet/pods/4fd323ae-11bf-4207-bdce-4d51a9c19dc3/volumes/kubernetes.io~projected/kube-api-access-2ct9j major:0 minor:148 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4fd323ae-11bf-4207-bdce-4d51a9c19dc3/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/4fd323ae-11bf-4207-bdce-4d51a9c19dc3/volumes/kubernetes.io~secret/webhook-cert major:0 minor:140 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a058138-8039-4841-821b-7ee5bb8648e4/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/5a058138-8039-4841-821b-7ee5bb8648e4/volumes/kubernetes.io~projected/kube-api-access major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a058138-8039-4841-821b-7ee5bb8648e4/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/5a058138-8039-4841-821b-7ee5bb8648e4/volumes/kubernetes.io~secret/serving-cert major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a92a557-d023-4531-b3a3-e559af0fe358/volumes/kubernetes.io~projected/kube-api-access-vgvcz:{mountpoint:/var/lib/kubelet/pods/5a92a557-d023-4531-b3a3-e559af0fe358/volumes/kubernetes.io~projected/kube-api-access-vgvcz major:0 minor:238 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/631b3a8e-43e0-4818-b6e1-bd61ac531ab6/volumes/kubernetes.io~projected/kube-api-access-6q425:{mountpoint:/var/lib/kubelet/pods/631b3a8e-43e0-4818-b6e1-bd61ac531ab6/volumes/kubernetes.io~projected/kube-api-access-6q425 major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/631b3a8e-43e0-4818-b6e1-bd61ac531ab6/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/631b3a8e-43e0-4818-b6e1-bd61ac531ab6/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6/volumes/kubernetes.io~projected/kube-api-access-bdzj9:{mountpoint:/var/lib/kubelet/pods/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6/volumes/kubernetes.io~projected/kube-api-access-bdzj9 major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/89e15db4-c541-4d53-878d-706fa022f970/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/89e15db4-c541-4d53-878d-706fa022f970/volumes/kubernetes.io~projected/kube-api-access major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/89e15db4-c541-4d53-878d-706fa022f970/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/89e15db4-c541-4d53-878d-706fa022f970/volumes/kubernetes.io~secret/serving-cert major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/89fc77c9-b444-4828-8a35-c63ea9335245/volumes/kubernetes.io~projected/kube-api-access-6xrfv:{mountpoint:/var/lib/kubelet/pods/89fc77c9-b444-4828-8a35-c63ea9335245/volumes/kubernetes.io~projected/kube-api-access-6xrfv major:0 minor:91 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/89fc77c9-b444-4828-8a35-c63ea9335245/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/89fc77c9-b444-4828-8a35-c63ea9335245/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/90ef7c0a-7c6f-45aa-865d-1e247110b265/volumes/kubernetes.io~projected/kube-api-access-ttqvt:{mountpoint:/var/lib/kubelet/pods/90ef7c0a-7c6f-45aa-865d-1e247110b265/volumes/kubernetes.io~projected/kube-api-access-ttqvt major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/90ef7c0a-7c6f-45aa-865d-1e247110b265/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/90ef7c0a-7c6f-45aa-865d-1e247110b265/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/9d40fba7-84f0-46d7-9b49-dbba7aab20c5/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/9d40fba7-84f0-46d7-9b49-dbba7aab20c5/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d40fba7-84f0-46d7-9b49-dbba7aab20c5/volumes/kubernetes.io~projected/kube-api-access-hl7m5:{mountpoint:/var/lib/kubelet/pods/9d40fba7-84f0-46d7-9b49-dbba7aab20c5/volumes/kubernetes.io~projected/kube-api-access-hl7m5 major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d40fba7-84f0-46d7-9b49-dbba7aab20c5/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/9d40fba7-84f0-46d7-9b49-dbba7aab20c5/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a55bef81-2381-4036-b171-3dbc77e9c25d/volumes/kubernetes.io~projected/kube-api-access-hj7h8:{mountpoint:/var/lib/kubelet/pods/a55bef81-2381-4036-b171-3dbc77e9c25d/volumes/kubernetes.io~projected/kube-api-access-hj7h8 major:0 minor:98 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aadf7b67-db33-4392-81f5-1b93eef54545/volumes/kubernetes.io~projected/kube-api-access-n4vq9:{mountpoint:/var/lib/kubelet/pods/aadf7b67-db33-4392-81f5-1b93eef54545/volumes/kubernetes.io~projected/kube-api-access-n4vq9 major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bd1bcaff-7dbd-4559-92fc-5453993f643e/volumes/kubernetes.io~projected/kube-api-access-wplgs:{mountpoint:/var/lib/kubelet/pods/bd1bcaff-7dbd-4559-92fc-5453993f643e/volumes/kubernetes.io~projected/kube-api-access-wplgs major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bd1bcaff-7dbd-4559-92fc-5453993f643e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/bd1bcaff-7dbd-4559-92fc-5453993f643e/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/volumes/kubernetes.io~projected/kube-api-access-89prb:{mountpoint:/var/lib/kubelet/pods/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/volumes/kubernetes.io~projected/kube-api-access-89prb major:0 minor:263 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/volumes/kubernetes.io~secret/etcd-client major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/volumes/kubernetes.io~secret/serving-cert major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d5eee869-c27f-4534-bbce-d954c42b36a3/volumes/kubernetes.io~projected/kube-api-access-l2tk7:{mountpoint:/var/lib/kubelet/pods/d5eee869-c27f-4534-bbce-d954c42b36a3/volumes/kubernetes.io~projected/kube-api-access-l2tk7 major:0 minor:118 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d5f84bd4-2803-41ff-a1d1-a549991fe895/volumes/kubernetes.io~projected/kube-api-access-7v2gh:{mountpoint:/var/lib/kubelet/pods/d5f84bd4-2803-41ff-a1d1-a549991fe895/volumes/kubernetes.io~projected/kube-api-access-7v2gh major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d68278f6-59d5-4bbf-b969-e47635ffd4cc/volumes/kubernetes.io~projected/kube-api-access-sstv2:{mountpoint:/var/lib/kubelet/pods/d68278f6-59d5-4bbf-b969-e47635ffd4cc/volumes/kubernetes.io~projected/kube-api-access-sstv2 major:0 minor:269 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d82cf0db-0891-482d-856b-1675843042dd/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/d82cf0db-0891-482d-856b-1675843042dd/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:246 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d82cf0db-0891-482d-856b-1675843042dd/volumes/kubernetes.io~projected/kube-api-access-g4kt5:{mountpoint:/var/lib/kubelet/pods/d82cf0db-0891-482d-856b-1675843042dd/volumes/kubernetes.io~projected/kube-api-access-g4kt5 major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ed56c17f-7e15-4776-80a6-3ef091307e89/volumes/kubernetes.io~projected/kube-api-access-4kxn4:{mountpoint:/var/lib/kubelet/pods/ed56c17f-7e15-4776-80a6-3ef091307e89/volumes/kubernetes.io~projected/kube-api-access-4kxn4 major:0 minor:262 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ef16d7ae-66aa-45d4-b1a6-1327738a46bb/volumes/kubernetes.io~projected/kube-api-access-mgfrv:{mountpoint:/var/lib/kubelet/pods/ef16d7ae-66aa-45d4-b1a6-1327738a46bb/volumes/kubernetes.io~projected/kube-api-access-mgfrv major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f6ee6202-11e5-4586-ae46-075da1ad7f1a/volumes/kubernetes.io~projected/kube-api-access-njrcj:{mountpoint:/var/lib/kubelet/pods/f6ee6202-11e5-4586-ae46-075da1ad7f1a/volumes/kubernetes.io~projected/kube-api-access-njrcj major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6/volumes/kubernetes.io~projected/kube-api-access-7q68p:{mountpoint:/var/lib/kubelet/pods/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6/volumes/kubernetes.io~projected/kube-api-access-7q68p major:0 minor:252 fsType:tmpfs blockSize:0} overlay_0-106:{mountpoint:/var/lib/containers/storage/overlay/ed4ceb0bf7ee197bbe517f84763840276d5d3458c0de9236cb4c125c0aa08877/merged major:0 minor:106 fsType:overlay blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/5689510d7b898f3389fb75db8163fe7b275e90bcd4d5ecf5a2ae482bab7d5367/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-110:{mountpoint:/var/lib/containers/storage/overlay/64e741437a938b8dde0692e97e97d5be86f1c586d4fb4ee6a89bc7c34fa8efcc/merged major:0 minor:110 fsType:overlay blockSize:0} 
overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/9e8a37843b53028b3e2c52c0d6d61b1f1ae808dafdc0a835241f3f9ffd231fb9/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/09592d1e24a6d95d2603666f65f7ea884b31db65ebe836ad8dc6a9e3cdbe985a/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/193af17e293f991c31e24667bf74a7f95ee71b9ae4526e9b23cbf46e51da0a7a/merged major:0 minor:131 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/6ce88a6a4cf530f52e40d3c5b1c408b3703aec6836fc4d095b37b68d5f41dfda/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/a1df295eba3f2844ec53a3966350b20c7526a33c983ba41bcc1e800a75a41fd9/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-138:{mountpoint:/var/lib/containers/storage/overlay/c723ebc7f449989671d4fb7c855ce202ff344510a1599542347e5d08f891f77c/merged major:0 minor:138 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/25002767fb608f673b61c99a596c81c0d0e7c1e443a841b95932ba4a854e4754/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/79d51a26cc268951514171ee03ef11b8e4e2d08b73bdcfe9ffb9c0507a5042ef/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/58e7d421126003c5e6697fd917ff7d81e17d1c02abd686531aae03dbea2555e7/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/fc39b9efe13e7c92d136b0296fc22352054c68baab4e9dc9721a0ce03daf74ad/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-158:{mountpoint:/var/lib/containers/storage/overlay/80f4d29cee450cddd3a0ce6d6b046d7ad1348f00326842781307bd20b7485aad/merged major:0 minor:158 fsType:overlay blockSize:0} 
overlay_0-160:{mountpoint:/var/lib/containers/storage/overlay/e23bb6ac73a13f7773eb8112d2e8e6b2861a27ab1089adedb1b145bd25ad49fa/merged major:0 minor:160 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/cc95ac09c8597c96850b9012ee0b895964857d58aa8318737d1dbca06d63fe71/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/339e31e71bda0061d0e64ca3ed96354c248b898e77ee1b278ce9d0c2ad4f05d8/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/9ccb602c6fe0d50e59b6a08ac881b19df29109eb0779de0695638de3eaae3daa/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/3e8ac4325be8c86eb950c2b31367c33c3228f589ab4d3a7a066bb6c0b0502eb0/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/4cb9afad51fcb61d47a6d46f4d5a818051fb386063ddb84f2a39e3a3e934144a/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/8eaf8d12e9599ecaf1bd96a658238d730a2974926b2ad73c980633a98d3765e6/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/03be4afa51585932e1ab53893eb10145b02dafe8c3cc898b5ec9e4846681edea/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-258:{mountpoint:/var/lib/containers/storage/overlay/64b9936ba91ca2bd4a32da5b28044bb5cbf5688ae42c63e8f5c908265dacf1a2/merged major:0 minor:258 fsType:overlay blockSize:0} overlay_0-265:{mountpoint:/var/lib/containers/storage/overlay/60973adef19c3dcb04fe88167ae33b3ba90d46fa31906ceac1a125aca96c749e/merged major:0 minor:265 fsType:overlay blockSize:0} overlay_0-267:{mountpoint:/var/lib/containers/storage/overlay/c6a49a0fa7068016c30f8433830253f65661642f1757bcddd471f717dfe6a9c3/merged major:0 minor:267 fsType:overlay blockSize:0} 
overlay_0-270:{mountpoint:/var/lib/containers/storage/overlay/f43358c3ea802de8961fd751683d6c0c7c1d845206e1717783deec507896b189/merged major:0 minor:270 fsType:overlay blockSize:0} overlay_0-275:{mountpoint:/var/lib/containers/storage/overlay/2b21278f9f61a867b9792d0d016094c580d0a3c87c36764d44b80409afe9d23c/merged major:0 minor:275 fsType:overlay blockSize:0} overlay_0-283:{mountpoint:/var/lib/containers/storage/overlay/038410bec10be52e15eac956c33b38c568e629809aef0d7cb3f31e5b0c31cee3/merged major:0 minor:283 fsType:overlay blockSize:0} overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/648396e8555649c7ad6f25332b118ada5744d6d1ec2059288eb6a5f3e387b50b/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/74b5c0311ece6873cd203ed01e069264c4535e89b881063b6d78f2ada3daead4/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/e66a3397c578cda0c5c74a35fa08363df4d636e87f209795899ef13ad7f0131c/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/27c7d091d7e55a9e1329c33f7e0bf0d7e26248519a1242ab90d2892999942a7a/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/d2adb0ccd23d27959db73dfe908f90bafd3dc2956d88d39a3c8554a4a7cf48fb/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/ce752a33c7df6cf1c040116eba442ee8d20260696d4bf082a9a65aa1c3c1d649/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-298:{mountpoint:/var/lib/containers/storage/overlay/3d08ee46e095b20dc52fada7d1f8c4d0d0414e15c097d5920cea3b9b7b7042e8/merged major:0 minor:298 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/f2349cb35cfb725bf225d40609943907c904e50a8b722e4b4b9d1c2382e601e3/merged major:0 minor:305 fsType:overlay blockSize:0} 
overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/4be34b5d0dd738ee09477fd6491dd2ee7e2f41a587ca6fbfbd6b1650af6c2d01/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/8e528056fc3e1c86fe58d6c3e080179a7ea80dcd79b1974635116f12c9f592d6/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/7c0109989687d25086f2e4674a26df17e36a79f9938e755208b11ef3840cefa7/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/181065cc841dac07ffd8634a0452eeb55719dc2dfb876d81082471a79fb5dfbb/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/1b006144b5ddaf0463624963c91229b12742cb80052e0134dd632bc5715a5645/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/e48a62baa07dbc700ebb292f40c08a5facf63eb10e5e6daed754390e81c43a1c/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/2aeea86c0b65f151ffeaa56c8621e02189420931452d3395f10d9b958aab2d6f/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-69:{mountpoint:/var/lib/containers/storage/overlay/830e79db0930df9c4067a1d177e2708eb6762399bff25d8b75d79c7cea646aa8/merged major:0 minor:69 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/393b0b00f379623d3a46dfd087815c7400555b285d12a5f4470a0a9f974c07c0/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-75:{mountpoint:/var/lib/containers/storage/overlay/153f0e93fe45d239e54191661069e5dc1b15a1681af36a7e2706a15ff1b2dd52/merged major:0 minor:75 fsType:overlay blockSize:0} overlay_0-84:{mountpoint:/var/lib/containers/storage/overlay/a901e831e3088ced719e54fec4927b146e60e3f947e58485d4313652629146ac/merged major:0 minor:84 fsType:overlay blockSize:0} 
overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/9ea7d825909077968f7c0e3471bcd77eebc9a779602f1f6ef1da65cfca8752e8/merged major:0 minor:89 fsType:overlay blockSize:0}] Mar 08 03:11:03.708571 master-0 kubenswrapper[7387]: I0308 03:11:03.707847 7387 manager.go:217] Machine: {Timestamp:2026-03-08 03:11:03.706112479 +0000 UTC m=+0.100588200 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ca41eca1edff4210bb11657bca9f1e6d SystemUUID:ca41eca1-edff-4210-bb11-657bca9f1e6d BootID:c341f940-4e88-4b9b-a4b4-98442bfad22d Filesystems:[{Device:/var/lib/kubelet/pods/f6ee6202-11e5-4586-ae46-075da1ad7f1a/volumes/kubernetes.io~projected/kube-api-access-njrcj DeviceMajor:0 DeviceMinor:123 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/bd1bcaff-7dbd-4559-92fc-5453993f643e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d82cf0db-0891-482d-856b-1675843042dd/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:246 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1232aad5956093753d35685897e21ebb416211a87662dd6ecf51a5d3e9c0b32a/userdata/shm DeviceMajor:0 DeviceMinor:231 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e7ddc2cc17107ecc5f5679a895a40a2316543cd8ac3957bbb6fdbfd52f258bbd/userdata/shm DeviceMajor:0 DeviceMinor:256 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 
DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9d40fba7-84f0-46d7-9b49-dbba7aab20c5/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/ef16d7ae-66aa-45d4-b1a6-1327738a46bb/volumes/kubernetes.io~projected/kube-api-access-mgfrv DeviceMajor:0 DeviceMinor:220 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/aadf7b67-db33-4392-81f5-1b93eef54545/volumes/kubernetes.io~projected/kube-api-access-n4vq9 DeviceMajor:0 DeviceMinor:243 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/90ef7c0a-7c6f-45aa-865d-1e247110b265/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ed56c17f-7e15-4776-80a6-3ef091307e89/volumes/kubernetes.io~projected/kube-api-access-4kxn4 DeviceMajor:0 DeviceMinor:262 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d577cf22293cc3efccf6f8d7b5c5def3ac27aeb747212f6643892edfacc4bbc3/userdata/shm DeviceMajor:0 DeviceMinor:291 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a55bef81-2381-4036-b171-3dbc77e9c25d/volumes/kubernetes.io~projected/kube-api-access-hj7h8 DeviceMajor:0 DeviceMinor:98 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/var/lib/kubelet/pods/0722d9c3-77b8-4770-9171-d4aeba4b0cc7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:230 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/1d446527-f3fd-4a37-a980-7445031928d1/volumes/kubernetes.io~projected/kube-api-access-2qvl4 DeviceMajor:0 DeviceMinor:272 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dfc903a3a09201aa3b1c76a517a337916f356be7b6618a2128b1dc4f4785ac63/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/89fc77c9-b444-4828-8a35-c63ea9335245/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/33abd37edec3b6673abf4565124ec1bb97dfb231042f8c1557bae037c9db586c/userdata/shm DeviceMajor:0 DeviceMinor:281 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/9d40fba7-84f0-46d7-9b49-dbba7aab20c5/volumes/kubernetes.io~projected/kube-api-access-hl7m5 DeviceMajor:0 DeviceMinor:127 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/89e15db4-c541-4d53-878d-706fa022f970/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:236 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/2a506cf6-bc39-4089-9caa-4c14c4d15c11/volumes/kubernetes.io~projected/kube-api-access-7flfl DeviceMajor:0 DeviceMinor:264 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-298 DeviceMajor:0 DeviceMinor:298 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f2057fa5db1def1b4beab4f6ad7ad5d375b26c00136a93b9850880221e4af077/userdata/shm DeviceMajor:0 DeviceMinor:100 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:227 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b835d8031dbcbc04b5cf9f5f9326f7df63aa6cc447918f61407dc7395da0cf96/userdata/shm DeviceMajor:0 DeviceMinor:277 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-160 DeviceMajor:0 DeviceMinor:160 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/4711e21f-da6d-47ee-8722-64663e05de10/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9f1c6c0636a4899d7b1fba463483019132e2775ba2d317a272e9611e9eb04fdb/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-75 DeviceMajor:0 DeviceMinor:75 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/90ef7c0a-7c6f-45aa-865d-1e247110b265/volumes/kubernetes.io~projected/kube-api-access-ttqvt DeviceMajor:0 DeviceMinor:215 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/5a92a557-d023-4531-b3a3-e559af0fe358/volumes/kubernetes.io~projected/kube-api-access-vgvcz DeviceMajor:0 DeviceMinor:238 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/197afe92-5912-4e90-a477-e3abe001bbc7/volumes/kubernetes.io~projected/kube-api-access-2kd6j DeviceMajor:0 DeviceMinor:219 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/17b37add10475bc68eb15628021eecebb97b383f212ff9b1f6eec1b7b5ecb93d/userdata/shm DeviceMajor:0 DeviceMinor:279 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-69 DeviceMajor:0 DeviceMinor:69 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/197afe92-5912-4e90-a477-e3abe001bbc7/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:216 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/da13ebe4bb39b539d69ddd6f98c92aef7a368cb8e590b47b5129b0e84f51f727/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/1fa64f1b-9f10-488b-8f94-1600774062c4/volumes/kubernetes.io~projected/kube-api-access-8k2lp DeviceMajor:0 DeviceMinor:251 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/975b4d0b44381f65f95d81f848a4362b6807994f0beac99be40baae93513b5d6/userdata/shm DeviceMajor:0 DeviceMinor:253 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0e61ec2701bfc25eb5be928b08cc38e792bd258a0029c05a51bd1e479e58f0e3/userdata/shm DeviceMajor:0 DeviceMinor:41 
Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7318cd3451d32a71b4c756d7048c3d653bc133c447ae6a1c5c593d8efda4718a/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-265 DeviceMajor:0 DeviceMinor:265 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d5eee869-c27f-4534-bbce-d954c42b36a3/volumes/kubernetes.io~projected/kube-api-access-l2tk7 DeviceMajor:0 DeviceMinor:118 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/3d69f101-60a8-41fd-bcda-4eb654c626a2/volumes/kubernetes.io~projected/kube-api-access-8gnng DeviceMajor:0 DeviceMinor:221 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/78bd83c51ec0b72f8c1c51a4e8cc4279f7e9fc2470a6586c4f8e968fc90dd9c1/userdata/shm DeviceMajor:0 DeviceMinor:241 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2468d2a3-ec65-4888-a86a-3f66fa311f56/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:255 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d82cf0db-0891-482d-856b-1675843042dd/volumes/kubernetes.io~projected/kube-api-access-g4kt5 DeviceMajor:0 DeviceMinor:261 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bc9a73581d61f23b90565e1504479bb07c7036d273c39ea527a5c5e5f96ad318/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/89e15db4-c541-4d53-878d-706fa022f970/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:260 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-158 DeviceMajor:0 DeviceMinor:158 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9d40fba7-84f0-46d7-9b49-dbba7aab20c5/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:126 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/volumes/kubernetes.io~projected/kube-api-access-89prb DeviceMajor:0 DeviceMinor:263 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/631b3a8e-43e0-4818-b6e1-bd61ac531ab6/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2cfaca9fcdc537eb7c408c01daad733c4e6c46861c4477e533321e5ad366b94d/userdata/shm DeviceMajor:0 DeviceMinor:144 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-258 DeviceMajor:0 DeviceMinor:258 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-267 DeviceMajor:0 DeviceMinor:267 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1f7c9726-057b-4c5c-8a03-9bc407dedb9b/volumes/kubernetes.io~projected/kube-api-access 
DeviceMajor:0 DeviceMinor:99 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/631b3a8e-43e0-4818-b6e1-bd61ac531ab6/volumes/kubernetes.io~projected/kube-api-access-6q425 DeviceMajor:0 DeviceMinor:125 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0722d9c3-77b8-4770-9171-d4aeba4b0cc7/volumes/kubernetes.io~projected/kube-api-access-vnvtg DeviceMajor:0 DeviceMinor:244 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3656e53b736cafa9b6c056ac5eca5807c9f3942f84ffbe91cd640949d983eff6/userdata/shm DeviceMajor:0 DeviceMinor:273 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-283 DeviceMajor:0 DeviceMinor:283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-110 DeviceMajor:0 DeviceMinor:110 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1d446527-f3fd-4a37-a980-7445031928d1/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:225 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/5a058138-8039-4841-821b-7ee5bb8648e4/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:240 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-270 DeviceMajor:0 DeviceMinor:270 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-275 DeviceMajor:0 DeviceMinor:275 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d5f84bd4-2803-41ff-a1d1-a549991fe895/volumes/kubernetes.io~projected/kube-api-access-7v2gh DeviceMajor:0 DeviceMinor:223 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:235 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-138 DeviceMajor:0 DeviceMinor:138 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4fd323ae-11bf-4207-bdce-4d51a9c19dc3/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:140 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2a506cf6-bc39-4089-9caa-4c14c4d15c11/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:232 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4fd323ae-11bf-4207-bdce-4d51a9c19dc3/volumes/kubernetes.io~projected/kube-api-access-2ct9j DeviceMajor:0 DeviceMinor:148 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/bd1bcaff-7dbd-4559-92fc-5453993f643e/volumes/kubernetes.io~projected/kube-api-access-wplgs DeviceMajor:0 DeviceMinor:218 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/1fa64f1b-9f10-488b-8f94-1600774062c4/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:224 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6/volumes/kubernetes.io~projected/kube-api-access-bdzj9 DeviceMajor:0 DeviceMinor:245 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6c2ad8212c197eee7b469f1de5efa66984b471df3e1f03d54b6b5ff8745f2152/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5a058138-8039-4841-821b-7ee5bb8648e4/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:229 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/32cd08c82c3a9782e49f0aedb6e9aa5133016a2e1a1a498bd5a24df1a9fb1acd/userdata/shm DeviceMajor:0 DeviceMinor:237 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e0863a084dab5a5150480ef18603c4be97dcab69eda52c04e9d468c989d32511/userdata/shm DeviceMajor:0 DeviceMinor:104 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6/volumes/kubernetes.io~projected/kube-api-access-7q68p DeviceMajor:0 DeviceMinor:252 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/00d76aa6e00e12ac364afa83e5fd631d414e7872b31bf1feb62fc1d452ac8d6a/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/103158c5-c99f-4224-bf5a-e23b1aaf9172/volumes/kubernetes.io~projected/kube-api-access-m5pgg DeviceMajor:0 DeviceMinor:222 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d68278f6-59d5-4bbf-b969-e47635ffd4cc/volumes/kubernetes.io~projected/kube-api-access-sstv2 DeviceMajor:0 DeviceMinor:269 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-84 DeviceMajor:0 DeviceMinor:84 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/89fc77c9-b444-4828-8a35-c63ea9335245/volumes/kubernetes.io~projected/kube-api-access-6xrfv DeviceMajor:0 DeviceMinor:91 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c6d3624a26cf17ed6d9d863dbd0123f9d75c4ad1fd279b49f51b9d0ec0bcd2e7/userdata/shm DeviceMajor:0 DeviceMinor:249 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0a9eb19952ec20b1658c5d7279dba5a3e819952572f69b34c3995c362fd16f77/userdata/shm DeviceMajor:0 DeviceMinor:247 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-106 DeviceMajor:0 DeviceMinor:106 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2468d2a3-ec65-4888-a86a-3f66fa311f56/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:226 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4711e21f-da6d-47ee-8722-64663e05de10/volumes/kubernetes.io~projected/kube-api-access-ms6s7 DeviceMajor:0 DeviceMinor:217 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b47ec93978468330f5b6fd9911611a54c62310997396935ab30d9d7feb5533c5/userdata/shm DeviceMajor:0 DeviceMinor:228 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:0a9eb19952ec20b MacAddress:a6:c1:f4:5f:da:8e Speed:10000 Mtu:8900} {Name:1232aad59560937 MacAddress:4e:a3:1c:98:39:24 Speed:10000 Mtu:8900} {Name:17b37add10475bc MacAddress:d6:8a:bc:bc:ba:95 Speed:10000 Mtu:8900} {Name:32cd08c82c3a978 MacAddress:92:60:aa:d4:d0:55 Speed:10000 Mtu:8900} {Name:33abd37edec3b66 MacAddress:ba:ac:0b:3d:68:f3 Speed:10000 Mtu:8900} {Name:3656e53b736cafa MacAddress:fe:ac:ce:8e:e0:ee Speed:10000 Mtu:8900} {Name:78bd83c51ec0b72 MacAddress:a2:da:14:5d:9d:c8 Speed:10000 Mtu:8900} {Name:975b4d0b44381f6 MacAddress:ca:98:6e:41:d9:40 Speed:10000 Mtu:8900} {Name:b47ec9397846833 MacAddress:8e:cb:62:f2:3e:e5 Speed:10000 Mtu:8900} {Name:b835d8031dbcbc0 MacAddress:22:c3:65:27:a8:09 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:02:98:c1:3d:60:73 Speed:0 Mtu:8900} {Name:c6d3624a26cf17e MacAddress:82:97:a8:5b:69:17 Speed:10000 Mtu:8900} {Name:d577cf22293cc3e MacAddress:96:96:df:d1:38:3f Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 
MacAddress:fa:16:3e:b5:5c:2e Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:da:1c:db:80:ac:18 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 
Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 08 03:11:03.708571 master-0 kubenswrapper[7387]: I0308 03:11:03.708544 7387 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 08 03:11:03.709012 master-0 kubenswrapper[7387]: I0308 03:11:03.708740 7387 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 08 03:11:03.709148 master-0 kubenswrapper[7387]: I0308 03:11:03.709115 7387 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 08 03:11:03.709338 master-0 kubenswrapper[7387]: I0308 03:11:03.709300 7387 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 08 03:11:03.709581 master-0 kubenswrapper[7387]: I0308 03:11:03.709326 7387 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percenta
ge":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 08 03:11:03.709640 master-0 kubenswrapper[7387]: I0308 03:11:03.709623 7387 topology_manager.go:138] "Creating topology manager with none policy" Mar 08 03:11:03.709640 master-0 kubenswrapper[7387]: I0308 03:11:03.709635 7387 container_manager_linux.go:303] "Creating device plugin manager" Mar 08 03:11:03.709703 master-0 kubenswrapper[7387]: I0308 03:11:03.709643 7387 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 08 03:11:03.709703 master-0 kubenswrapper[7387]: I0308 03:11:03.709666 7387 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 08 03:11:03.709796 master-0 kubenswrapper[7387]: I0308 03:11:03.709777 7387 state_mem.go:36] "Initialized new in-memory state store" Mar 08 03:11:03.709870 master-0 kubenswrapper[7387]: I0308 03:11:03.709852 7387 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 08 03:11:03.711837 master-0 kubenswrapper[7387]: I0308 03:11:03.711002 7387 kubelet.go:418] "Attempting to sync node with API server" Mar 08 03:11:03.711837 master-0 kubenswrapper[7387]: I0308 03:11:03.711038 7387 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 08 03:11:03.711837 master-0 kubenswrapper[7387]: I0308 03:11:03.711068 7387 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 08 03:11:03.711837 master-0 kubenswrapper[7387]: I0308 03:11:03.711082 7387 kubelet.go:324] "Adding apiserver pod source" Mar 08 03:11:03.711837 master-0 
kubenswrapper[7387]: I0308 03:11:03.711102 7387 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 08 03:11:03.712340 master-0 kubenswrapper[7387]: I0308 03:11:03.712199 7387 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 08 03:11:03.712340 master-0 kubenswrapper[7387]: I0308 03:11:03.712333 7387 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Mar 08 03:11:03.712583 master-0 kubenswrapper[7387]: I0308 03:11:03.712566 7387 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 08 03:11:03.712711 master-0 kubenswrapper[7387]: I0308 03:11:03.712690 7387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 08 03:11:03.712711 master-0 kubenswrapper[7387]: I0308 03:11:03.712710 7387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 08 03:11:03.712776 master-0 kubenswrapper[7387]: I0308 03:11:03.712717 7387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 08 03:11:03.712776 master-0 kubenswrapper[7387]: I0308 03:11:03.712725 7387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 08 03:11:03.712776 master-0 kubenswrapper[7387]: I0308 03:11:03.712732 7387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 08 03:11:03.712776 master-0 kubenswrapper[7387]: I0308 03:11:03.712739 7387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 08 03:11:03.712776 master-0 kubenswrapper[7387]: I0308 03:11:03.712745 7387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 08 03:11:03.712776 master-0 kubenswrapper[7387]: I0308 03:11:03.712752 7387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 08 03:11:03.712776 master-0 
kubenswrapper[7387]: I0308 03:11:03.712760 7387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 08 03:11:03.712776 master-0 kubenswrapper[7387]: I0308 03:11:03.712767 7387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 08 03:11:03.713003 master-0 kubenswrapper[7387]: I0308 03:11:03.712790 7387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 08 03:11:03.713003 master-0 kubenswrapper[7387]: I0308 03:11:03.712803 7387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 08 03:11:03.713003 master-0 kubenswrapper[7387]: I0308 03:11:03.712838 7387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 08 03:11:03.713714 master-0 kubenswrapper[7387]: I0308 03:11:03.713204 7387 server.go:1280] "Started kubelet" Mar 08 03:11:03.713714 master-0 kubenswrapper[7387]: I0308 03:11:03.713299 7387 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 08 03:11:03.713714 master-0 kubenswrapper[7387]: I0308 03:11:03.713363 7387 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 08 03:11:03.713714 master-0 kubenswrapper[7387]: I0308 03:11:03.713310 7387 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 08 03:11:03.713714 master-0 kubenswrapper[7387]: I0308 03:11:03.713669 7387 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 08 03:11:03.714170 master-0 systemd[1]: Started Kubernetes Kubelet. 
Mar 08 03:11:03.715515 master-0 kubenswrapper[7387]: I0308 03:11:03.715485 7387 server.go:449] "Adding debug handlers to kubelet server"
Mar 08 03:11:03.723531 master-0 kubenswrapper[7387]: I0308 03:11:03.722493 7387 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 08 03:11:03.723531 master-0 kubenswrapper[7387]: I0308 03:11:03.722751 7387 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 08 03:11:03.725958 master-0 kubenswrapper[7387]: I0308 03:11:03.725926 7387 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 08 03:11:03.726040 master-0 kubenswrapper[7387]: I0308 03:11:03.725967 7387 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 08 03:11:03.726040 master-0 kubenswrapper[7387]: I0308 03:11:03.725990 7387 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-09 03:01:08 +0000 UTC, rotation deadline is 2026-03-08 22:18:11.571481795 +0000 UTC
Mar 08 03:11:03.726040 master-0 kubenswrapper[7387]: I0308 03:11:03.726034 7387 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h7m7.845449992s for next certificate rotation
Mar 08 03:11:03.726949 master-0 kubenswrapper[7387]: I0308 03:11:03.726454 7387 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 08 03:11:03.726949 master-0 kubenswrapper[7387]: I0308 03:11:03.726477 7387 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 08 03:11:03.726949 master-0 kubenswrapper[7387]: I0308 03:11:03.726488 7387 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Mar 08 03:11:03.727194 master-0 kubenswrapper[7387]: I0308 03:11:03.727166 7387 factory.go:55] Registering systemd factory
Mar 08 03:11:03.727194 master-0 kubenswrapper[7387]: I0308 03:11:03.727190 7387 factory.go:221] Registration of the systemd container factory successfully
Mar 08 03:11:03.727417 master-0 kubenswrapper[7387]: I0308 03:11:03.727397 7387 factory.go:153] Registering CRI-O factory
Mar 08 03:11:03.727417 master-0 kubenswrapper[7387]: I0308 03:11:03.727406 7387 factory.go:221] Registration of the crio container factory successfully
Mar 08 03:11:03.727507 master-0 kubenswrapper[7387]: I0308 03:11:03.727459 7387 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 08 03:11:03.727507 master-0 kubenswrapper[7387]: I0308 03:11:03.727479 7387 factory.go:103] Registering Raw factory
Mar 08 03:11:03.727507 master-0 kubenswrapper[7387]: I0308 03:11:03.727491 7387 manager.go:1196] Started watching for new ooms in manager
Mar 08 03:11:03.728949 master-0 kubenswrapper[7387]: I0308 03:11:03.727892 7387 manager.go:319] Starting recovery of all containers
Mar 08 03:11:03.728949 master-0 kubenswrapper[7387]: I0308 03:11:03.728718 7387 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 08 03:11:03.732313 master-0 kubenswrapper[7387]: I0308 03:11:03.732259 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4fd323ae-11bf-4207-bdce-4d51a9c19dc3" volumeName="kubernetes.io/secret/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-webhook-cert" seLinuxMountContext=""
Mar 08 03:11:03.732313 master-0 kubenswrapper[7387]: I0308 03:11:03.732302 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6" volumeName="kubernetes.io/configmap/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-trusted-ca" seLinuxMountContext=""
Mar 08 03:11:03.732392 master-0 kubenswrapper[7387]: I0308 03:11:03.732318 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89fc77c9-b444-4828-8a35-c63ea9335245" volumeName="kubernetes.io/projected/89fc77c9-b444-4828-8a35-c63ea9335245-kube-api-access-6xrfv" seLinuxMountContext=""
Mar 08 03:11:03.732392 master-0 kubenswrapper[7387]: I0308 03:11:03.732333 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="90ef7c0a-7c6f-45aa-865d-1e247110b265" volumeName="kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-trusted-ca-bundle" seLinuxMountContext=""
Mar 08 03:11:03.732392 master-0 kubenswrapper[7387]: I0308 03:11:03.732345 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="90ef7c0a-7c6f-45aa-865d-1e247110b265" volumeName="kubernetes.io/projected/90ef7c0a-7c6f-45aa-865d-1e247110b265-kube-api-access-ttqvt" seLinuxMountContext=""
Mar 08 03:11:03.732392 master-0 kubenswrapper[7387]: I0308 03:11:03.732356 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d40fba7-84f0-46d7-9b49-dbba7aab20c5" volumeName="kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovnkube-config" seLinuxMountContext=""
Mar 08 03:11:03.732392 master-0 kubenswrapper[7387]: I0308 03:11:03.732366 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="197afe92-5912-4e90-a477-e3abe001bbc7" volumeName="kubernetes.io/projected/197afe92-5912-4e90-a477-e3abe001bbc7-kube-api-access-2kd6j" seLinuxMountContext=""
Mar 08 03:11:03.732392 master-0 kubenswrapper[7387]: I0308 03:11:03.732378 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1fa64f1b-9f10-488b-8f94-1600774062c4" volumeName="kubernetes.io/projected/1fa64f1b-9f10-488b-8f94-1600774062c4-kube-api-access-8k2lp" seLinuxMountContext=""
Mar 08 03:11:03.732392 master-0 kubenswrapper[7387]: I0308 03:11:03.732392 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d69f101-60a8-41fd-bcda-4eb654c626a2" volumeName="kubernetes.io/projected/3d69f101-60a8-41fd-bcda-4eb654c626a2-kube-api-access-8gnng" seLinuxMountContext=""
Mar 08 03:11:03.732568 master-0 kubenswrapper[7387]: I0308 03:11:03.732404 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4711e21f-da6d-47ee-8722-64663e05de10" volumeName="kubernetes.io/secret/4711e21f-da6d-47ee-8722-64663e05de10-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Mar 08 03:11:03.732568 master-0 kubenswrapper[7387]: I0308 03:11:03.732415 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a058138-8039-4841-821b-7ee5bb8648e4" volumeName="kubernetes.io/configmap/5a058138-8039-4841-821b-7ee5bb8648e4-config" seLinuxMountContext=""
Mar 08 03:11:03.732568 master-0 kubenswrapper[7387]: I0308 03:11:03.732425 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a058138-8039-4841-821b-7ee5bb8648e4" volumeName="kubernetes.io/secret/5a058138-8039-4841-821b-7ee5bb8648e4-serving-cert" seLinuxMountContext=""
Mar 08 03:11:03.732568 master-0 kubenswrapper[7387]: I0308 03:11:03.732435 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="631b3a8e-43e0-4818-b6e1-bd61ac531ab6" volumeName="kubernetes.io/secret/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Mar 08 03:11:03.732568 master-0 kubenswrapper[7387]: I0308 03:11:03.732446 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d40fba7-84f0-46d7-9b49-dbba7aab20c5" volumeName="kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-env-overrides" seLinuxMountContext=""
Mar 08 03:11:03.732568 master-0 kubenswrapper[7387]: I0308 03:11:03.732456 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d446527-f3fd-4a37-a980-7445031928d1" volumeName="kubernetes.io/secret/1d446527-f3fd-4a37-a980-7445031928d1-serving-cert" seLinuxMountContext=""
Mar 08 03:11:03.732568 master-0 kubenswrapper[7387]: I0308 03:11:03.732464 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6e4afd0-fbcd-49c7-9132-b54c9c28b74b" volumeName="kubernetes.io/secret/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-client" seLinuxMountContext=""
Mar 08 03:11:03.732568 master-0 kubenswrapper[7387]: I0308 03:11:03.732474 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6e4afd0-fbcd-49c7-9132-b54c9c28b74b" volumeName="kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-service-ca" seLinuxMountContext=""
Mar 08 03:11:03.732568 master-0 kubenswrapper[7387]: I0308 03:11:03.732482 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4711e21f-da6d-47ee-8722-64663e05de10" volumeName="kubernetes.io/projected/4711e21f-da6d-47ee-8722-64663e05de10-kube-api-access-ms6s7" seLinuxMountContext=""
Mar 08 03:11:03.732568 master-0 kubenswrapper[7387]: I0308 03:11:03.732492 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5eee869-c27f-4534-bbce-d954c42b36a3" volumeName="kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-whereabouts-configmap" seLinuxMountContext=""
Mar 08 03:11:03.732568 master-0 kubenswrapper[7387]: I0308 03:11:03.732501 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d82cf0db-0891-482d-856b-1675843042dd" volumeName="kubernetes.io/configmap/d82cf0db-0891-482d-856b-1675843042dd-trusted-ca" seLinuxMountContext=""
Mar 08 03:11:03.732568 master-0 kubenswrapper[7387]: I0308 03:11:03.732509 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef16d7ae-66aa-45d4-b1a6-1327738a46bb" volumeName="kubernetes.io/projected/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-kube-api-access-mgfrv" seLinuxMountContext=""
Mar 08 03:11:03.732568 master-0 kubenswrapper[7387]: I0308 03:11:03.732518 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2468d2a3-ec65-4888-a86a-3f66fa311f56" volumeName="kubernetes.io/configmap/2468d2a3-ec65-4888-a86a-3f66fa311f56-config" seLinuxMountContext=""
Mar 08 03:11:03.732568 master-0 kubenswrapper[7387]: I0308 03:11:03.732526 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6" volumeName="kubernetes.io/projected/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-kube-api-access-bdzj9" seLinuxMountContext=""
Mar 08 03:11:03.732568 master-0 kubenswrapper[7387]: I0308 03:11:03.732534 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89e15db4-c541-4d53-878d-706fa022f970" volumeName="kubernetes.io/secret/89e15db4-c541-4d53-878d-706fa022f970-serving-cert" seLinuxMountContext=""
Mar 08 03:11:03.732568 master-0 kubenswrapper[7387]: I0308 03:11:03.732544 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd1bcaff-7dbd-4559-92fc-5453993f643e" volumeName="kubernetes.io/secret/bd1bcaff-7dbd-4559-92fc-5453993f643e-serving-cert" seLinuxMountContext=""
Mar 08 03:11:03.732568 master-0 kubenswrapper[7387]: I0308 03:11:03.732553 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6e4afd0-fbcd-49c7-9132-b54c9c28b74b" volumeName="kubernetes.io/secret/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-serving-cert" seLinuxMountContext=""
Mar 08 03:11:03.732568 master-0 kubenswrapper[7387]: I0308 03:11:03.732563 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="197afe92-5912-4e90-a477-e3abe001bbc7" volumeName="kubernetes.io/projected/197afe92-5912-4e90-a477-e3abe001bbc7-bound-sa-token" seLinuxMountContext=""
Mar 08 03:11:03.732568 master-0 kubenswrapper[7387]: I0308 03:11:03.732574 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2468d2a3-ec65-4888-a86a-3f66fa311f56" volumeName="kubernetes.io/projected/2468d2a3-ec65-4888-a86a-3f66fa311f56-kube-api-access" seLinuxMountContext=""
Mar 08 03:11:03.732568 master-0 kubenswrapper[7387]: I0308 03:11:03.732585 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6e4afd0-fbcd-49c7-9132-b54c9c28b74b" volumeName="kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-config" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732597 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1fa64f1b-9f10-488b-8f94-1600774062c4" volumeName="kubernetes.io/secret/1fa64f1b-9f10-488b-8f94-1600774062c4-serving-cert" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732607 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2a506cf6-bc39-4089-9caa-4c14c4d15c11" volumeName="kubernetes.io/configmap/2a506cf6-bc39-4089-9caa-4c14c4d15c11-config" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732616 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="90ef7c0a-7c6f-45aa-865d-1e247110b265" volumeName="kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-config" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732625 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd1bcaff-7dbd-4559-92fc-5453993f643e" volumeName="kubernetes.io/empty-dir/bd1bcaff-7dbd-4559-92fc-5453993f643e-available-featuregates" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732634 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d82cf0db-0891-482d-856b-1675843042dd" volumeName="kubernetes.io/projected/d82cf0db-0891-482d-856b-1675843042dd-bound-sa-token" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732643 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d82cf0db-0891-482d-856b-1675843042dd" volumeName="kubernetes.io/projected/d82cf0db-0891-482d-856b-1675843042dd-kube-api-access-g4kt5" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732668 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed56c17f-7e15-4776-80a6-3ef091307e89" volumeName="kubernetes.io/configmap/ed56c17f-7e15-4776-80a6-3ef091307e89-telemetry-config" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732683 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d446527-f3fd-4a37-a980-7445031928d1" volumeName="kubernetes.io/configmap/1d446527-f3fd-4a37-a980-7445031928d1-config" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732692 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4fd323ae-11bf-4207-bdce-4d51a9c19dc3" volumeName="kubernetes.io/projected/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-kube-api-access-2ct9j" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732701 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4711e21f-da6d-47ee-8722-64663e05de10" volumeName="kubernetes.io/empty-dir/4711e21f-da6d-47ee-8722-64663e05de10-operand-assets" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732710 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="90ef7c0a-7c6f-45aa-865d-1e247110b265" volumeName="kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-service-ca-bundle" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732720 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d40fba7-84f0-46d7-9b49-dbba7aab20c5" volumeName="kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovnkube-script-lib" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732730 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a55bef81-2381-4036-b171-3dbc77e9c25d" volumeName="kubernetes.io/configmap/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-daemon-config" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732740 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aadf7b67-db33-4392-81f5-1b93eef54545" volumeName="kubernetes.io/configmap/aadf7b67-db33-4392-81f5-1b93eef54545-iptables-alerter-script" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732750 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8711b9f-3d18-4b8d-a263-2c9af9dc68a6" volumeName="kubernetes.io/projected/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-kube-api-access-7q68p" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732759 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="631b3a8e-43e0-4818-b6e1-bd61ac531ab6" volumeName="kubernetes.io/configmap/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-env-overrides" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732769 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a058138-8039-4841-821b-7ee5bb8648e4" volumeName="kubernetes.io/projected/5a058138-8039-4841-821b-7ee5bb8648e4-kube-api-access" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732779 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a55bef81-2381-4036-b171-3dbc77e9c25d" volumeName="kubernetes.io/projected/a55bef81-2381-4036-b171-3dbc77e9c25d-kube-api-access-hj7h8" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732788 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed56c17f-7e15-4776-80a6-3ef091307e89" volumeName="kubernetes.io/projected/ed56c17f-7e15-4776-80a6-3ef091307e89-kube-api-access-4kxn4" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732797 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4fd323ae-11bf-4207-bdce-4d51a9c19dc3" volumeName="kubernetes.io/configmap/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-ovnkube-identity-cm" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732806 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2a506cf6-bc39-4089-9caa-4c14c4d15c11" volumeName="kubernetes.io/secret/2a506cf6-bc39-4089-9caa-4c14c4d15c11-serving-cert" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732815 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4fd323ae-11bf-4207-bdce-4d51a9c19dc3" volumeName="kubernetes.io/configmap/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-env-overrides" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732823 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="90ef7c0a-7c6f-45aa-865d-1e247110b265" volumeName="kubernetes.io/secret/90ef7c0a-7c6f-45aa-865d-1e247110b265-serving-cert" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732836 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d40fba7-84f0-46d7-9b49-dbba7aab20c5" volumeName="kubernetes.io/secret/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovn-node-metrics-cert" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732959 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6e4afd0-fbcd-49c7-9132-b54c9c28b74b" volumeName="kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-ca" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732977 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0722d9c3-77b8-4770-9171-d4aeba4b0cc7" volumeName="kubernetes.io/secret/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-serving-cert" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.732989 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1f7c9726-057b-4c5c-8a03-9bc407dedb9b" volumeName="kubernetes.io/configmap/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-service-ca" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.733001 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d40fba7-84f0-46d7-9b49-dbba7aab20c5" volumeName="kubernetes.io/projected/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-kube-api-access-hl7m5" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.733013 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f6ee6202-11e5-4586-ae46-075da1ad7f1a" volumeName="kubernetes.io/projected/f6ee6202-11e5-4586-ae46-075da1ad7f1a-kube-api-access-njrcj" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.733024 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="103158c5-c99f-4224-bf5a-e23b1aaf9172" volumeName="kubernetes.io/configmap/103158c5-c99f-4224-bf5a-e23b1aaf9172-trusted-ca" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.733034 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89e15db4-c541-4d53-878d-706fa022f970" volumeName="kubernetes.io/projected/89e15db4-c541-4d53-878d-706fa022f970-kube-api-access" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.733046 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a55bef81-2381-4036-b171-3dbc77e9c25d" volumeName="kubernetes.io/configmap/a55bef81-2381-4036-b171-3dbc77e9c25d-cni-binary-copy" seLinuxMountContext=""
Mar 08 03:11:03.733058 master-0 kubenswrapper[7387]: I0308 03:11:03.733061 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5eee869-c27f-4534-bbce-d954c42b36a3" volumeName="kubernetes.io/projected/d5eee869-c27f-4534-bbce-d954c42b36a3-kube-api-access-l2tk7" seLinuxMountContext=""
Mar 08 03:11:03.733793 master-0 kubenswrapper[7387]: I0308 03:11:03.733176 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="631b3a8e-43e0-4818-b6e1-bd61ac531ab6" volumeName="kubernetes.io/configmap/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-ovnkube-config" seLinuxMountContext=""
Mar 08 03:11:03.733793 master-0 kubenswrapper[7387]: I0308 03:11:03.733286 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2468d2a3-ec65-4888-a86a-3f66fa311f56" volumeName="kubernetes.io/secret/2468d2a3-ec65-4888-a86a-3f66fa311f56-serving-cert" seLinuxMountContext=""
Mar 08 03:11:03.733793 master-0 kubenswrapper[7387]: I0308 03:11:03.733300 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5eee869-c27f-4534-bbce-d954c42b36a3" volumeName="kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-cni-binary-copy" seLinuxMountContext=""
Mar 08 03:11:03.733793 master-0 kubenswrapper[7387]: I0308 03:11:03.733336 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="197afe92-5912-4e90-a477-e3abe001bbc7" volumeName="kubernetes.io/configmap/197afe92-5912-4e90-a477-e3abe001bbc7-trusted-ca" seLinuxMountContext=""
Mar 08 03:11:03.733793 master-0 kubenswrapper[7387]: I0308 03:11:03.733347 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="631b3a8e-43e0-4818-b6e1-bd61ac531ab6" volumeName="kubernetes.io/projected/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-kube-api-access-6q425" seLinuxMountContext=""
Mar 08 03:11:03.733793 master-0 kubenswrapper[7387]: I0308 03:11:03.733359 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89e15db4-c541-4d53-878d-706fa022f970" volumeName="kubernetes.io/configmap/89e15db4-c541-4d53-878d-706fa022f970-config" seLinuxMountContext=""
Mar 08 03:11:03.733793 master-0 kubenswrapper[7387]: I0308 03:11:03.733432 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aadf7b67-db33-4392-81f5-1b93eef54545" volumeName="kubernetes.io/projected/aadf7b67-db33-4392-81f5-1b93eef54545-kube-api-access-n4vq9" seLinuxMountContext=""
Mar 08 03:11:03.733793 master-0 kubenswrapper[7387]: I0308 03:11:03.733452 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd1bcaff-7dbd-4559-92fc-5453993f643e" volumeName="kubernetes.io/projected/bd1bcaff-7dbd-4559-92fc-5453993f643e-kube-api-access-wplgs" seLinuxMountContext=""
Mar 08 03:11:03.733793 master-0 kubenswrapper[7387]: I0308 03:11:03.733464 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5f84bd4-2803-41ff-a1d1-a549991fe895" volumeName="kubernetes.io/projected/d5f84bd4-2803-41ff-a1d1-a549991fe895-kube-api-access-7v2gh" seLinuxMountContext=""
Mar 08 03:11:03.733793 master-0 kubenswrapper[7387]: I0308 03:11:03.733475 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1f7c9726-057b-4c5c-8a03-9bc407dedb9b" volumeName="kubernetes.io/projected/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-kube-api-access" seLinuxMountContext=""
Mar 08 03:11:03.733793 master-0 kubenswrapper[7387]: I0308 03:11:03.733487 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0722d9c3-77b8-4770-9171-d4aeba4b0cc7" volumeName="kubernetes.io/projected/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-kube-api-access-vnvtg" seLinuxMountContext=""
Mar 08 03:11:03.733793 master-0 kubenswrapper[7387]: I0308 03:11:03.733497 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d446527-f3fd-4a37-a980-7445031928d1" volumeName="kubernetes.io/projected/1d446527-f3fd-4a37-a980-7445031928d1-kube-api-access-2qvl4" seLinuxMountContext=""
Mar 08 03:11:03.733793 master-0 kubenswrapper[7387]: I0308 03:11:03.733508 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a92a557-d023-4531-b3a3-e559af0fe358" volumeName="kubernetes.io/projected/5a92a557-d023-4531-b3a3-e559af0fe358-kube-api-access-vgvcz" seLinuxMountContext=""
Mar 08 03:11:03.733793 master-0 kubenswrapper[7387]: I0308 03:11:03.733519 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89fc77c9-b444-4828-8a35-c63ea9335245" volumeName="kubernetes.io/secret/89fc77c9-b444-4828-8a35-c63ea9335245-metrics-tls" seLinuxMountContext=""
Mar 08 03:11:03.733793 master-0 kubenswrapper[7387]: I0308 03:11:03.733529 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6e4afd0-fbcd-49c7-9132-b54c9c28b74b" volumeName="kubernetes.io/projected/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-kube-api-access-89prb" seLinuxMountContext=""
Mar 08 03:11:03.733793 master-0 kubenswrapper[7387]: I0308 03:11:03.733540 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5eee869-c27f-4534-bbce-d954c42b36a3" volumeName="kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-cni-sysctl-allowlist" seLinuxMountContext=""
Mar 08 03:11:03.733793 master-0 kubenswrapper[7387]: I0308 03:11:03.733552 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d68278f6-59d5-4bbf-b969-e47635ffd4cc" volumeName="kubernetes.io/projected/d68278f6-59d5-4bbf-b969-e47635ffd4cc-kube-api-access-sstv2" seLinuxMountContext=""
Mar 08 03:11:03.733793 master-0 kubenswrapper[7387]: I0308 03:11:03.733564 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0722d9c3-77b8-4770-9171-d4aeba4b0cc7" volumeName="kubernetes.io/configmap/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-config" seLinuxMountContext=""
Mar 08 03:11:03.733793 master-0 kubenswrapper[7387]: I0308 03:11:03.733577 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1fa64f1b-9f10-488b-8f94-1600774062c4" volumeName="kubernetes.io/configmap/1fa64f1b-9f10-488b-8f94-1600774062c4-config" seLinuxMountContext=""
Mar 08 03:11:03.733793 master-0 kubenswrapper[7387]: I0308 03:11:03.733588 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2a506cf6-bc39-4089-9caa-4c14c4d15c11" volumeName="kubernetes.io/projected/2a506cf6-bc39-4089-9caa-4c14c4d15c11-kube-api-access-7flfl" seLinuxMountContext=""
Mar 08 03:11:03.733793 master-0 kubenswrapper[7387]: I0308 03:11:03.733599 7387 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="103158c5-c99f-4224-bf5a-e23b1aaf9172" volumeName="kubernetes.io/projected/103158c5-c99f-4224-bf5a-e23b1aaf9172-kube-api-access-m5pgg" seLinuxMountContext=""
Mar 08 03:11:03.733793 master-0 kubenswrapper[7387]: I0308 03:11:03.733609 7387 reconstruct.go:97] "Volume reconstruction finished"
Mar 08 03:11:03.733793 master-0 kubenswrapper[7387]: I0308 03:11:03.733616 7387 reconciler.go:26] "Reconciler: start to sync state"
Mar 08 03:11:03.737815 master-0 kubenswrapper[7387]: I0308 03:11:03.737191 7387 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Mar 08 03:11:03.754931 master-0 kubenswrapper[7387]: I0308 03:11:03.754841 7387 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 08 03:11:03.758524 master-0 kubenswrapper[7387]: I0308 03:11:03.758487 7387 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 08 03:11:03.758524 master-0 kubenswrapper[7387]: I0308 03:11:03.758522 7387 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 08 03:11:03.758601 master-0 kubenswrapper[7387]: I0308 03:11:03.758543 7387 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 08 03:11:03.758655 master-0 kubenswrapper[7387]: E0308 03:11:03.758620 7387 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 08 03:11:03.760297 master-0 kubenswrapper[7387]: I0308 03:11:03.760250 7387 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 08 03:11:03.768215 master-0 kubenswrapper[7387]: I0308 03:11:03.768162 7387 generic.go:334] "Generic (PLEG): container finished" podID="cb1042c7-d08a-436c-a737-11573992faff" containerID="8f306ce0a691aaca594f05377489d0fedf338512ca0fc5f460eabd4f8b2245d1" exitCode=0
Mar 08 03:11:03.770785 master-0 kubenswrapper[7387]: I0308 03:11:03.770756 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log"
Mar 08 03:11:03.771208 master-0 kubenswrapper[7387]: I0308 03:11:03.771164 7387 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="6641777c0515379fb5521281634350e0ba16889bd714d491e11bd483e3de969d" exitCode=1
Mar 08 03:11:03.771257 master-0 kubenswrapper[7387]: I0308 03:11:03.771227 7387 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="85f16f93cd690b5924a3bfd91c9387cfb9f04d71df5230de7d45bf3e26eb0168" exitCode=0
Mar 08 03:11:03.784501 master-0 kubenswrapper[7387]: I0308 03:11:03.784100 7387 generic.go:334] "Generic (PLEG): container finished" podID="d5eee869-c27f-4534-bbce-d954c42b36a3" containerID="78b54e7113882d3d58fadca33d022029333723850c915170784718d6b2d76fb0" exitCode=0
Mar 08 03:11:03.784501 master-0 kubenswrapper[7387]: I0308 03:11:03.784488 7387 generic.go:334] "Generic (PLEG): container finished" podID="d5eee869-c27f-4534-bbce-d954c42b36a3" containerID="c5b6441f57692234cdd23b54b466923a1bdca368557471aa9c56fb86e4cb27c5" exitCode=0
Mar 08 03:11:03.784501 master-0 kubenswrapper[7387]: I0308 03:11:03.784500 7387 generic.go:334] "Generic (PLEG): container finished" podID="d5eee869-c27f-4534-bbce-d954c42b36a3" containerID="e69760dd587dd773054d2c68d80450fae7ea78d2c0d9ae71eb6479ccbfb89605" exitCode=0
Mar 08 03:11:03.784671 master-0 kubenswrapper[7387]: I0308 03:11:03.784510 7387 generic.go:334] "Generic (PLEG): container finished" podID="d5eee869-c27f-4534-bbce-d954c42b36a3" containerID="23e3dd34f3f6fc9e0e38ff8f0cff6316ca3075b2e57bb67cfa5a7c613c4186a1" exitCode=0
Mar 08 03:11:03.784671 master-0 kubenswrapper[7387]: I0308 03:11:03.784519 7387 generic.go:334] "Generic (PLEG): container finished" podID="d5eee869-c27f-4534-bbce-d954c42b36a3" containerID="c9ed066ab454b7a45ceb4d194fe0690fb319c3957701da913065477256cffc60" exitCode=0
Mar 08 03:11:03.784671 master-0 kubenswrapper[7387]: I0308 03:11:03.784528 7387 generic.go:334] "Generic (PLEG): container finished" podID="d5eee869-c27f-4534-bbce-d954c42b36a3" containerID="c819f7232b6c404b174ef7e43a5fe243e69bdbd6f882a1b6a72687cf4603a3a5" exitCode=0
Mar 08 03:11:03.795021 master-0 kubenswrapper[7387]: I0308 03:11:03.794981 7387 generic.go:334] "Generic (PLEG): container finished" podID="f5e953eb-2d1d-4d67-969b-bdecc69b61f0" containerID="fa364304eb5003254684c63c5eb9681efe16b224f31c3dd661492ecd5fa5deda" exitCode=0
Mar 08 03:11:03.801883 master-0 kubenswrapper[7387]: I0308 03:11:03.801850 7387 generic.go:334] "Generic (PLEG): container finished" podID="9d40fba7-84f0-46d7-9b49-dbba7aab20c5" containerID="3c3d9e33877d35a402198be63a50621dbf8be27a97d9c8596143b4df8d2863cd" exitCode=0
Mar 08 03:11:03.804385 master-0 kubenswrapper[7387]: I0308 03:11:03.804359 7387 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="c01067259586e4e210f6ac056b5faed267ec0e7e5fd3d0ff25d2928d118c8a91" exitCode=0
Mar 08 03:11:03.822050 master-0 kubenswrapper[7387]: I0308 03:11:03.821969 7387 generic.go:334] "Generic (PLEG): container finished" podID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerID="ceef095090a1d3d01781b25cb0242da09fb6b070883bd9d80a5643827283dd10" exitCode=1
Mar 08 03:11:03.822224 master-0 kubenswrapper[7387]: I0308 03:11:03.822139 7387 generic.go:334] "Generic (PLEG): container finished" podID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerID="e8ae217b16264d0a65f7a6526e393271363768450bd80231ec390001016f54d9" exitCode=0
Mar 08 03:11:03.822224 master-0 kubenswrapper[7387]: I0308 03:11:03.822160 7387 generic.go:334] "Generic (PLEG): container finished" podID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerID="d2717efe98dded98a430bdbb1e6c67542780e4d9e9da8780960f6cb5607dfa1c" exitCode=0
Mar 08 03:11:03.822224 master-0 kubenswrapper[7387]: I0308 03:11:03.822173 7387 generic.go:334] "Generic (PLEG): container finished" podID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerID="59842391c2f906e2a1d04139b13a4ad11d03d05812a1e42fe92cdb6ad399f2df" exitCode=0
Mar 08 03:11:03.822224 master-0 kubenswrapper[7387]: I0308 03:11:03.822187 7387 generic.go:334] "Generic (PLEG): container finished" podID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerID="3a9dc2434f3a5f5442ceae28b6a41707b31b23f92a0be759748599422ca97a2b" exitCode=143
Mar 08 03:11:03.822224 master-0 kubenswrapper[7387]: I0308 03:11:03.822199 7387 generic.go:334] "Generic (PLEG): container finished" podID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerID="8b175beb4b4b0f0ca1a091f7935455e85c66628fb2cebb53ac0ceffa81dfe13c" exitCode=143
Mar 08 03:11:03.822224 master-0 kubenswrapper[7387]: I0308 03:11:03.822211 7387 generic.go:334] "Generic (PLEG): container finished" podID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerID="d287272d23a2bc7ff0f8d11895f5450b4df0a1fcc17b6293207d42ed15b1f661" exitCode=143
Mar 08 03:11:03.822224 master-0 kubenswrapper[7387]: I0308 03:11:03.822223 7387 generic.go:334] "Generic (PLEG): container finished" podID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerID="2d9e906d444a87e8be6d10da1d15aed8fb665fe3a18c1a9658beaacb2dc08a71" exitCode=143
Mar 08 03:11:03.822436 master-0 kubenswrapper[7387]: I0308 03:11:03.822235 7387 generic.go:334] "Generic (PLEG): container finished" podID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerID="be2882c714bad91ca07c5f4fb9d9845ae081aa06f8fae77c04d5d862e91663ab" exitCode=0
Mar 08 03:11:03.831508 master-0 kubenswrapper[7387]: I0308 03:11:03.831471 7387 manager.go:324] Recovery completed
Mar 08 03:11:03.859647 master-0 kubenswrapper[7387]: E0308 03:11:03.859594 7387 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 08 03:11:03.869080 master-0 kubenswrapper[7387]: I0308 03:11:03.869045 7387 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 08 03:11:03.869080 master-0 kubenswrapper[7387]: I0308 03:11:03.869070 7387 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 08 03:11:03.869192 master-0 kubenswrapper[7387]: I0308 03:11:03.869149 7387 state_mem.go:36] "Initialized new in-memory state store"
Mar 08 03:11:03.869558 master-0 kubenswrapper[7387]: I0308 03:11:03.869530 7387 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 08 03:11:03.869587 master-0 kubenswrapper[7387]: I0308 03:11:03.869550 7387 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 08 03:11:03.869614 master-0 kubenswrapper[7387]: I0308 03:11:03.869591 7387 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Mar 08 03:11:03.869614 master-0 kubenswrapper[7387]: I0308 03:11:03.869599 7387 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Mar 08 03:11:03.869614
master-0 kubenswrapper[7387]: I0308 03:11:03.869605 7387 policy_none.go:49] "None policy: Start" Mar 08 03:11:03.871373 master-0 kubenswrapper[7387]: I0308 03:11:03.871308 7387 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 08 03:11:03.871373 master-0 kubenswrapper[7387]: I0308 03:11:03.871353 7387 state_mem.go:35] "Initializing new in-memory state store" Mar 08 03:11:03.871656 master-0 kubenswrapper[7387]: I0308 03:11:03.871637 7387 state_mem.go:75] "Updated machine memory state" Mar 08 03:11:03.871656 master-0 kubenswrapper[7387]: I0308 03:11:03.871653 7387 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Mar 08 03:11:03.881595 master-0 kubenswrapper[7387]: I0308 03:11:03.881573 7387 manager.go:334] "Starting Device Plugin manager" Mar 08 03:11:03.881651 master-0 kubenswrapper[7387]: I0308 03:11:03.881629 7387 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 08 03:11:03.881651 master-0 kubenswrapper[7387]: I0308 03:11:03.881649 7387 server.go:79] "Starting device plugin registration server" Mar 08 03:11:03.882034 master-0 kubenswrapper[7387]: I0308 03:11:03.882017 7387 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 08 03:11:03.882100 master-0 kubenswrapper[7387]: I0308 03:11:03.882064 7387 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 08 03:11:03.882647 master-0 kubenswrapper[7387]: I0308 03:11:03.882625 7387 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 08 03:11:03.882751 master-0 kubenswrapper[7387]: I0308 03:11:03.882733 7387 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 08 03:11:03.882751 master-0 kubenswrapper[7387]: I0308 03:11:03.882748 7387 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 08 03:11:03.982334 master-0 kubenswrapper[7387]: I0308 
03:11:03.982265 7387 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:11:03.984053 master-0 kubenswrapper[7387]: I0308 03:11:03.984017 7387 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:11:03.984090 master-0 kubenswrapper[7387]: I0308 03:11:03.984056 7387 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:11:03.984090 master-0 kubenswrapper[7387]: I0308 03:11:03.984070 7387 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:11:03.984137 master-0 kubenswrapper[7387]: I0308 03:11:03.984117 7387 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 08 03:11:03.993580 master-0 kubenswrapper[7387]: I0308 03:11:03.993509 7387 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Mar 08 03:11:03.993629 master-0 kubenswrapper[7387]: I0308 03:11:03.993611 7387 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 08 03:11:04.060108 master-0 kubenswrapper[7387]: I0308 03:11:04.059748 7387 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"] Mar 08 03:11:04.060791 master-0 kubenswrapper[7387]: I0308 03:11:04.060373 7387 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74247a24bee81923a49c76bb5a3351b35d692a56184ad3e7d459ca63e5984aec" Mar 08 03:11:04.060791 master-0 kubenswrapper[7387]: I0308 03:11:04.060399 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" 
event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"94f9825100c515930737671c9db902b97098151c7357d0a97122a599d22e13f1"} Mar 08 03:11:04.060791 master-0 kubenswrapper[7387]: I0308 03:11:04.060448 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"6641777c0515379fb5521281634350e0ba16889bd714d491e11bd483e3de969d"} Mar 08 03:11:04.060791 master-0 kubenswrapper[7387]: I0308 03:11:04.060462 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"85f16f93cd690b5924a3bfd91c9387cfb9f04d71df5230de7d45bf3e26eb0168"} Mar 08 03:11:04.060791 master-0 kubenswrapper[7387]: I0308 03:11:04.060474 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"0e61ec2701bfc25eb5be928b08cc38e792bd258a0029c05a51bd1e479e58f0e3"} Mar 08 03:11:04.060791 master-0 kubenswrapper[7387]: I0308 03:11:04.060488 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"f80accad2b75f0dbc8ca9ec1b9207f9c29402e934558ea0edecba0bf20e9769f"} Mar 08 03:11:04.060791 master-0 kubenswrapper[7387]: I0308 03:11:04.060519 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"dfc903a3a09201aa3b1c76a517a337916f356be7b6618a2128b1dc4f4785ac63"} Mar 08 03:11:04.060791 master-0 kubenswrapper[7387]: I0308 03:11:04.060548 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"aa2691e43f5a9cad8fc8af5208972f2ff55b688600781969c01beedcd7050c10"} Mar 08 03:11:04.060791 master-0 kubenswrapper[7387]: I0308 03:11:04.060570 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"e9d535fbda5084e4f8056f4fa3682180009f17ff7a894af1bf7d64a04c950d56"} Mar 08 03:11:04.060791 master-0 kubenswrapper[7387]: I0308 03:11:04.060582 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"00d76aa6e00e12ac364afa83e5fd631d414e7872b31bf1feb62fc1d452ac8d6a"} Mar 08 03:11:04.060791 master-0 kubenswrapper[7387]: I0308 03:11:04.060598 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"67a655ba69c1284df3e55d35d8747eb2453fb400eccb0f1604d78be6e1c5d034"} Mar 08 03:11:04.060791 master-0 kubenswrapper[7387]: I0308 03:11:04.060610 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"f3c0f05b8863cad41e739a3290ee1b766e3215209ff171cd04766d542d2cefd2"} Mar 08 03:11:04.060791 master-0 kubenswrapper[7387]: I0308 03:11:04.060624 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"9f1c6c0636a4899d7b1fba463483019132e2775ba2d317a272e9611e9eb04fdb"} Mar 08 03:11:04.060791 master-0 kubenswrapper[7387]: I0308 03:11:04.060639 7387 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="7f48163433a800aeba4eb45dc8cedb1f723024dbb49945d8a5d3caa82f3778dc" Mar 08 03:11:04.060791 master-0 kubenswrapper[7387]: I0308 03:11:04.060680 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"296632ab9853e033010913fee076e7b35b875fbd7f05c08351eaf2c0ae69f50d"} Mar 08 03:11:04.060791 master-0 kubenswrapper[7387]: I0308 03:11:04.060692 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"bf4fabb9c08963210bf1f0d197a394d399879939569bdcc3789dd4b487cec36f"} Mar 08 03:11:04.060791 master-0 kubenswrapper[7387]: I0308 03:11:04.060706 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerDied","Data":"c01067259586e4e210f6ac056b5faed267ec0e7e5fd3d0ff25d2928d118c8a91"} Mar 08 03:11:04.060791 master-0 kubenswrapper[7387]: I0308 03:11:04.060719 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"bc9a73581d61f23b90565e1504479bb07c7036d273c39ea527a5c5e5f96ad318"} Mar 08 03:11:04.060791 master-0 kubenswrapper[7387]: I0308 03:11:04.060760 7387 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4a403ced26061f4a57952fc11b7d80ef9ddbc18727f66e65a74c804b23d6d97" Mar 08 03:11:04.071674 master-0 kubenswrapper[7387]: E0308 03:11:04.071632 7387 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 03:11:04.078345 master-0 
kubenswrapper[7387]: W0308 03:11:04.078322 7387 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 08 03:11:04.078484 master-0 kubenswrapper[7387]: E0308 03:11:04.078355 7387 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Mar 08 03:11:04.079938 master-0 kubenswrapper[7387]: E0308 03:11:04.079915 7387 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:11:04.080013 master-0 kubenswrapper[7387]: E0308 03:11:04.079982 7387 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 08 03:11:04.080150 master-0 kubenswrapper[7387]: E0308 03:11:04.080110 7387 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 08 03:11:04.139663 master-0 kubenswrapper[7387]: I0308 03:11:04.139466 7387 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 03:11:04.139663 master-0 kubenswrapper[7387]: I0308 03:11:04.139508 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 03:11:04.139663 master-0 kubenswrapper[7387]: I0308 03:11:04.139526 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:11:04.139663 master-0 kubenswrapper[7387]: I0308 03:11:04.139549 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 08 03:11:04.139663 master-0 kubenswrapper[7387]: I0308 03:11:04.139567 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 08 
03:11:04.139663 master-0 kubenswrapper[7387]: I0308 03:11:04.139585 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 08 03:11:04.139663 master-0 kubenswrapper[7387]: I0308 03:11:04.139598 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 03:11:04.139663 master-0 kubenswrapper[7387]: I0308 03:11:04.139616 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:11:04.139663 master-0 kubenswrapper[7387]: I0308 03:11:04.139635 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:11:04.139959 master-0 kubenswrapper[7387]: I0308 03:11:04.139698 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod 
\"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:11:04.139959 master-0 kubenswrapper[7387]: I0308 03:11:04.139744 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 08 03:11:04.139959 master-0 kubenswrapper[7387]: I0308 03:11:04.139765 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 08 03:11:04.139959 master-0 kubenswrapper[7387]: I0308 03:11:04.139782 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 03:11:04.139959 master-0 kubenswrapper[7387]: I0308 03:11:04.139797 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 03:11:04.139959 master-0 kubenswrapper[7387]: I0308 03:11:04.139817 7387 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 03:11:04.139959 master-0 kubenswrapper[7387]: I0308 03:11:04.139831 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:11:04.139959 master-0 kubenswrapper[7387]: I0308 03:11:04.139845 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 08 03:11:04.241035 master-0 kubenswrapper[7387]: I0308 03:11:04.240995 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 03:11:04.241035 master-0 kubenswrapper[7387]: I0308 03:11:04.241032 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 
03:11:04.241225 master-0 kubenswrapper[7387]: I0308 03:11:04.241189 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 03:11:04.241385 master-0 kubenswrapper[7387]: I0308 03:11:04.241273 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 03:11:04.241443 master-0 kubenswrapper[7387]: I0308 03:11:04.241421 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:11:04.241476 master-0 kubenswrapper[7387]: I0308 03:11:04.241446 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.241303 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.241702 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.241686 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.241321 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.241778 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.241833 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.241854 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.241859 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.241885 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.241874 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.241935 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: 
\"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.241930 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.241978 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.241979 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.242005 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.242012 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.242028 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.242036 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.242043 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.242070 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.242084 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod 
\"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.242100 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.242103 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.242115 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.242129 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.242148 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod 
\"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.242058 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:11:04.242196 master-0 kubenswrapper[7387]: I0308 03:11:04.242175 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 08 03:11:04.639356 master-0 kubenswrapper[7387]: I0308 03:11:04.639219 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:11:04.712077 master-0 kubenswrapper[7387]: I0308 03:11:04.711981 7387 apiserver.go:52] "Watching apiserver" Mar 08 03:11:04.721357 master-0 kubenswrapper[7387]: I0308 03:11:04.721322 7387 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 08 03:11:04.726068 master-0 kubenswrapper[7387]: I0308 03:11:04.725159 7387 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["assisted-installer/assisted-installer-controller-rtvl6","openshift-ingress-operator/ingress-operator-677db989d6-4bpl8","openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr","openshift-multus/multus-additional-cni-plugins-c8gc6","openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch","openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr","openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4","openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-xbrdp","openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld","openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf","openshift-network-operator/iptables-alerter-fpxrc","openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n","openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw","openshift-dns-operator/dns-operator-589895fbb7-9mhwc","openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq","openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf","openshift-network-diagnostics/network-check-target-4lx8s","openshift-ovn-kubernetes/ovnkube-node-jq7bv","openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7","openshift-multus/multus-admission-controller-8d675b596-xhkzl","openshift-multus/multus-jzw4f","openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx","openshift-multus/network-metrics-dae
mon-2l64n","openshift-network-node-identity/network-node-identity-ppdzb","openshift-network-operator/network-operator-7c649bf6d4-wxrfp","openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx","openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8"] Mar 08 03:11:04.726068 master-0 kubenswrapper[7387]: I0308 03:11:04.725447 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-rtvl6" Mar 08 03:11:04.727756 master-0 kubenswrapper[7387]: I0308 03:11:04.727546 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc" Mar 08 03:11:04.729075 master-0 kubenswrapper[7387]: I0308 03:11:04.727835 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 08 03:11:04.729075 master-0 kubenswrapper[7387]: I0308 03:11:04.727840 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 08 03:11:04.729650 master-0 kubenswrapper[7387]: I0308 03:11:04.729424 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:11:04.729714 master-0 kubenswrapper[7387]: I0308 03:11:04.729682 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 08 03:11:04.730483 master-0 kubenswrapper[7387]: I0308 03:11:04.730459 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 08 03:11:04.730557 master-0 kubenswrapper[7387]: I0308 03:11:04.730543 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 08 03:11:04.730768 master-0 kubenswrapper[7387]: I0308 03:11:04.730468 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 08 03:11:04.731121 master-0 kubenswrapper[7387]: I0308 03:11:04.731100 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 08 03:11:04.731314 master-0 kubenswrapper[7387]: I0308 03:11:04.731291 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 08 03:11:04.731362 master-0 kubenswrapper[7387]: I0308 03:11:04.731341 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 08 03:11:04.731436 master-0 kubenswrapper[7387]: I0308 03:11:04.731304 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 08 03:11:04.731695 master-0 kubenswrapper[7387]: I0308 03:11:04.731535 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 08 03:11:04.731695 master-0 kubenswrapper[7387]: I0308 03:11:04.731545 7387 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 08 03:11:04.731695 master-0 kubenswrapper[7387]: I0308 03:11:04.731580 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 08 03:11:04.731695 master-0 kubenswrapper[7387]: I0308 03:11:04.731613 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 08 03:11:04.732164 master-0 kubenswrapper[7387]: I0308 03:11:04.732126 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 08 03:11:04.732281 master-0 kubenswrapper[7387]: I0308 03:11:04.732258 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 08 03:11:04.733886 master-0 kubenswrapper[7387]: I0308 03:11:04.733481 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 08 03:11:04.733886 master-0 kubenswrapper[7387]: I0308 03:11:04.733484 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx" Mar 08 03:11:04.733886 master-0 kubenswrapper[7387]: I0308 03:11:04.733532 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" Mar 08 03:11:04.733886 master-0 kubenswrapper[7387]: I0308 03:11:04.733588 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" Mar 08 03:11:04.733886 master-0 kubenswrapper[7387]: I0308 03:11:04.733704 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" Mar 08 03:11:04.733886 master-0 kubenswrapper[7387]: I0308 03:11:04.733851 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx" Mar 08 03:11:04.734186 master-0 kubenswrapper[7387]: I0308 03:11:04.734157 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:11:04.734889 master-0 kubenswrapper[7387]: I0308 03:11:04.734514 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 08 03:11:04.734889 master-0 kubenswrapper[7387]: I0308 03:11:04.734566 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 08 03:11:04.734889 master-0 kubenswrapper[7387]: I0308 03:11:04.734704 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n" Mar 08 03:11:04.734889 master-0 kubenswrapper[7387]: I0308 03:11:04.734731 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" Mar 08 03:11:04.734889 master-0 kubenswrapper[7387]: I0308 03:11:04.734811 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 08 03:11:04.734889 master-0 kubenswrapper[7387]: I0308 03:11:04.734857 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 08 03:11:04.735242 master-0 kubenswrapper[7387]: I0308 03:11:04.735227 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:11:04.735537 master-0 kubenswrapper[7387]: I0308 03:11:04.735402 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl" Mar 08 03:11:04.735715 master-0 kubenswrapper[7387]: I0308 03:11:04.735676 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4lx8s" Mar 08 03:11:04.737809 master-0 kubenswrapper[7387]: I0308 03:11:04.737235 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 08 03:11:04.737809 master-0 kubenswrapper[7387]: I0308 03:11:04.737485 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 08 03:11:04.737809 master-0 kubenswrapper[7387]: I0308 03:11:04.737591 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 08 03:11:04.737809 master-0 kubenswrapper[7387]: I0308 03:11:04.737704 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 08 03:11:04.738097 master-0 kubenswrapper[7387]: I0308 03:11:04.738042 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 08 03:11:04.738166 master-0 kubenswrapper[7387]: I0308 03:11:04.738121 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 08 03:11:04.738433 master-0 kubenswrapper[7387]: I0308 03:11:04.738418 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 08 03:11:04.738700 master-0 kubenswrapper[7387]: I0308 
03:11:04.738663 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 08 03:11:04.740896 master-0 kubenswrapper[7387]: I0308 03:11:04.738844 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 08 03:11:04.740896 master-0 kubenswrapper[7387]: I0308 03:11:04.739034 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 08 03:11:04.740896 master-0 kubenswrapper[7387]: I0308 03:11:04.739668 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 08 03:11:04.740896 master-0 kubenswrapper[7387]: I0308 03:11:04.739753 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 08 03:11:04.740896 master-0 kubenswrapper[7387]: I0308 03:11:04.740123 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 08 03:11:04.740896 master-0 kubenswrapper[7387]: I0308 03:11:04.740231 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 08 03:11:04.740896 master-0 kubenswrapper[7387]: I0308 03:11:04.740352 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 08 03:11:04.740896 master-0 kubenswrapper[7387]: I0308 03:11:04.740396 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 08 03:11:04.740896 master-0 kubenswrapper[7387]: I0308 03:11:04.740441 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 08 03:11:04.740896 master-0 kubenswrapper[7387]: I0308 03:11:04.740513 
7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 08 03:11:04.740896 master-0 kubenswrapper[7387]: I0308 03:11:04.740652 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 08 03:11:04.740896 master-0 kubenswrapper[7387]: I0308 03:11:04.740735 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 08 03:11:04.740896 master-0 kubenswrapper[7387]: I0308 03:11:04.740828 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 08 03:11:04.740896 master-0 kubenswrapper[7387]: I0308 03:11:04.740841 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 08 03:11:04.742731 master-0 kubenswrapper[7387]: I0308 03:11:04.740858 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 08 03:11:04.743116 master-0 kubenswrapper[7387]: I0308 03:11:04.743073 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 08 03:11:04.743209 master-0 kubenswrapper[7387]: I0308 03:11:04.743184 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 08 03:11:04.743572 master-0 kubenswrapper[7387]: I0308 03:11:04.743548 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 08 03:11:04.744027 master-0 kubenswrapper[7387]: I0308 03:11:04.744007 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 08 03:11:04.744519 master-0 kubenswrapper[7387]: 
I0308 03:11:04.744475 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 08 03:11:04.744776 master-0 kubenswrapper[7387]: I0308 03:11:04.744732 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 08 03:11:04.745027 master-0 kubenswrapper[7387]: I0308 03:11:04.745006 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 08 03:11:04.745318 master-0 kubenswrapper[7387]: I0308 03:11:04.745282 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 08 03:11:04.745495 master-0 kubenswrapper[7387]: I0308 03:11:04.745451 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 08 03:11:04.745999 master-0 kubenswrapper[7387]: I0308 03:11:04.745852 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 08 03:11:04.745999 master-0 kubenswrapper[7387]: I0308 03:11:04.745884 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 08 03:11:04.746347 master-0 kubenswrapper[7387]: I0308 03:11:04.746326 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 08 03:11:04.746411 master-0 kubenswrapper[7387]: I0308 03:11:04.746391 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 08 03:11:04.746756 master-0 kubenswrapper[7387]: I0308 03:11:04.746740 7387 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 08 03:11:04.746799 master-0 kubenswrapper[7387]: I0308 03:11:04.746772 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 08 03:11:04.747044 master-0 kubenswrapper[7387]: I0308 03:11:04.747020 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 08 03:11:04.747044 master-0 kubenswrapper[7387]: I0308 03:11:04.747021 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 08 03:11:04.747305 master-0 kubenswrapper[7387]: I0308 03:11:04.747283 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 08 03:11:04.747432 master-0 kubenswrapper[7387]: I0308 03:11:04.747413 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 08 03:11:04.747602 master-0 kubenswrapper[7387]: I0308 03:11:04.747582 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 08 03:11:04.747784 master-0 kubenswrapper[7387]: I0308 03:11:04.747752 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 08 03:11:04.747844 master-0 kubenswrapper[7387]: I0308 03:11:04.747829 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 08 03:11:04.748073 master-0 kubenswrapper[7387]: I0308 03:11:04.748052 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 08 03:11:04.748334 master-0 kubenswrapper[7387]: I0308 03:11:04.748307 7387 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 08 03:11:04.748391 master-0 kubenswrapper[7387]: I0308 03:11:04.748317 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 08 03:11:04.748639 master-0 kubenswrapper[7387]: I0308 03:11:04.748613 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89e15db4-c541-4d53-878d-706fa022f970-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-rz5c8\" (UID: \"89e15db4-c541-4d53-878d-706fa022f970\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" Mar 08 03:11:04.748757 master-0 kubenswrapper[7387]: I0308 03:11:04.748741 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:11:04.748862 master-0 kubenswrapper[7387]: I0308 03:11:04.748843 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89prb\" (UniqueName: \"kubernetes.io/projected/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-kube-api-access-89prb\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:11:04.748997 master-0 kubenswrapper[7387]: I0308 03:11:04.748978 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gnng\" (UniqueName: \"kubernetes.io/projected/3d69f101-60a8-41fd-bcda-4eb654c626a2-kube-api-access-8gnng\") pod \"csi-snapshot-controller-operator-5685fbc7d-xbrdp\" (UID: 
\"3d69f101-60a8-41fd-bcda-4eb654c626a2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-xbrdp" Mar 08 03:11:04.749094 master-0 kubenswrapper[7387]: I0308 03:11:04.749077 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-client\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:11:04.749226 master-0 kubenswrapper[7387]: I0308 03:11:04.749208 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-config\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:11:04.749334 master-0 kubenswrapper[7387]: I0308 03:11:04.749316 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89e15db4-c541-4d53-878d-706fa022f970-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-rz5c8\" (UID: \"89e15db4-c541-4d53-878d-706fa022f970\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" Mar 08 03:11:04.749447 master-0 kubenswrapper[7387]: I0308 03:11:04.749429 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89e15db4-c541-4d53-878d-706fa022f970-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-rz5c8\" (UID: \"89e15db4-c541-4d53-878d-706fa022f970\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" Mar 08 03:11:04.749538 master-0 kubenswrapper[7387]: I0308 03:11:04.749514 7387 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 08 03:11:04.749618 master-0 kubenswrapper[7387]: I0308 03:11:04.749602 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-serving-cert\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:11:04.749707 master-0 kubenswrapper[7387]: I0308 03:11:04.749691 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-ca\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:11:04.749933 master-0 kubenswrapper[7387]: I0308 03:11:04.749894 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 08 03:11:04.750233 master-0 kubenswrapper[7387]: I0308 03:11:04.750215 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-ca\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:11:04.750664 master-0 kubenswrapper[7387]: I0308 03:11:04.750637 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 08 03:11:04.750774 master-0 kubenswrapper[7387]: I0308 03:11:04.750754 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 08 03:11:04.750824 master-0 kubenswrapper[7387]: I0308 
03:11:04.750792 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:11:04.750865 master-0 kubenswrapper[7387]: I0308 03:11:04.750850 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 08 03:11:04.750958 master-0 kubenswrapper[7387]: I0308 03:11:04.750938 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-client\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:11:04.751206 master-0 kubenswrapper[7387]: I0308 03:11:04.751174 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 08 03:11:04.751339 master-0 kubenswrapper[7387]: I0308 03:11:04.751315 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 08 03:11:04.751438 master-0 kubenswrapper[7387]: I0308 03:11:04.751410 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 08 03:11:04.751545 master-0 kubenswrapper[7387]: I0308 03:11:04.751525 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-config\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:11:04.752054 master-0 kubenswrapper[7387]: I0308 
03:11:04.752028 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-serving-cert\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll"
Mar 08 03:11:04.752119 master-0 kubenswrapper[7387]: I0308 03:11:04.752074 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89e15db4-c541-4d53-878d-706fa022f970-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-rz5c8\" (UID: \"89e15db4-c541-4d53-878d-706fa022f970\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8"
Mar 08 03:11:04.752159 master-0 kubenswrapper[7387]: I0308 03:11:04.752137 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89e15db4-c541-4d53-878d-706fa022f970-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-rz5c8\" (UID: \"89e15db4-c541-4d53-878d-706fa022f970\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8"
Mar 08 03:11:04.761947 master-0 kubenswrapper[7387]: I0308 03:11:04.761130 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 08 03:11:04.761947 master-0 kubenswrapper[7387]: I0308 03:11:04.761492 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 08 03:11:04.761947 master-0 kubenswrapper[7387]: I0308 03:11:04.761511 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 08 03:11:04.761947 master-0 kubenswrapper[7387]: I0308 03:11:04.761655 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 08 03:11:04.761947 master-0 kubenswrapper[7387]: I0308 03:11:04.761743 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 08 03:11:04.762100 master-0 kubenswrapper[7387]: I0308 03:11:04.761971 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 08 03:11:04.762100 master-0 kubenswrapper[7387]: I0308 03:11:04.762028 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 08 03:11:04.762224 master-0 kubenswrapper[7387]: I0308 03:11:04.762185 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 08 03:11:04.762802 master-0 kubenswrapper[7387]: I0308 03:11:04.762727 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 08 03:11:04.766856 master-0 kubenswrapper[7387]: I0308 03:11:04.766556 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 08 03:11:04.766856 master-0 kubenswrapper[7387]: I0308 03:11:04.766674 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Mar 08 03:11:04.767207 master-0 kubenswrapper[7387]: I0308 03:11:04.767172 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 08 03:11:04.770087 master-0 kubenswrapper[7387]: I0308 03:11:04.770062 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 08 03:11:04.772197 master-0 kubenswrapper[7387]: I0308 03:11:04.772129 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 08 03:11:04.772874 master-0 kubenswrapper[7387]: I0308 03:11:04.772843 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 08 03:11:04.774366 master-0 kubenswrapper[7387]: I0308 03:11:04.774345 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 08 03:11:04.779737 master-0 kubenswrapper[7387]: I0308 03:11:04.779698 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 08 03:11:04.780621 master-0 kubenswrapper[7387]: I0308 03:11:04.780511 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 08 03:11:04.780621 master-0 kubenswrapper[7387]: I0308 03:11:04.780535 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 08 03:11:04.780621 master-0 kubenswrapper[7387]: I0308 03:11:04.780554 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 08 03:11:04.780621 master-0 kubenswrapper[7387]: I0308 03:11:04.780583 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 08 03:11:04.781625 master-0 kubenswrapper[7387]: I0308 03:11:04.780628 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 08 03:11:04.781625 master-0 kubenswrapper[7387]: I0308 03:11:04.780687 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 08 03:11:04.794482 master-0 kubenswrapper[7387]: I0308 03:11:04.794452 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 08 03:11:04.814065 master-0 kubenswrapper[7387]: I0308 03:11:04.814014 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 08 03:11:04.829150 master-0 kubenswrapper[7387]: I0308 03:11:04.829099 7387 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 08 03:11:04.845122 master-0 kubenswrapper[7387]: I0308 03:11:04.845070 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 08 03:11:04.850813 master-0 kubenswrapper[7387]: I0308 03:11:04.850781 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q68p\" (UniqueName: \"kubernetes.io/projected/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-kube-api-access-7q68p\") pod \"package-server-manager-854648ff6d-8qznw\" (UID: \"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw"
Mar 08 03:11:04.850989 master-0 kubenswrapper[7387]: I0308 03:11:04.850825 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90ef7c0a-7c6f-45aa-865d-1e247110b265-serving-cert\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg"
Mar 08 03:11:04.850989 master-0 kubenswrapper[7387]: I0308 03:11:04.850853 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttqvt\" (UniqueName: \"kubernetes.io/projected/90ef7c0a-7c6f-45aa-865d-1e247110b265-kube-api-access-ttqvt\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg"
Mar 08 03:11:04.851335 master-0 kubenswrapper[7387]: I0308 03:11:04.851217 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90ef7c0a-7c6f-45aa-865d-1e247110b265-serving-cert\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg"
Mar 08 03:11:04.851335 master-0 kubenswrapper[7387]: I0308 03:11:04.851307 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a55bef81-2381-4036-b171-3dbc77e9c25d-cni-binary-copy\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.851535 master-0 kubenswrapper[7387]: I0308 03:11:04.851341 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/89fc77c9-b444-4828-8a35-c63ea9335245-host-etc-kube\") pod \"network-operator-7c649bf6d4-wxrfp\" (UID: \"89fc77c9-b444-4828-8a35-c63ea9335245\") " pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp"
Mar 08 03:11:04.851535 master-0 kubenswrapper[7387]: I0308 03:11:04.851369 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4711e21f-da6d-47ee-8722-64663e05de10-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-7vlmt\" (UID: \"4711e21f-da6d-47ee-8722-64663e05de10\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt"
Mar 08 03:11:04.851535 master-0 kubenswrapper[7387]: I0308 03:11:04.851396 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4kt5\" (UniqueName: \"kubernetes.io/projected/d82cf0db-0891-482d-856b-1675843042dd-kube-api-access-g4kt5\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq"
Mar 08 03:11:04.851535 master-0 kubenswrapper[7387]: I0308 03:11:04.851422 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovn-node-metrics-cert\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.851695 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a55bef81-2381-4036-b171-3dbc77e9c25d-cni-binary-copy\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.851770 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4711e21f-da6d-47ee-8722-64663e05de10-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-7vlmt\" (UID: \"4711e21f-da6d-47ee-8722-64663e05de10\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.851840 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovn-node-metrics-cert\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.851840 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/4711e21f-da6d-47ee-8722-64663e05de10-operand-assets\") pod \"cluster-olm-operator-77899cf6d-7vlmt\" (UID: \"4711e21f-da6d-47ee-8722-64663e05de10\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.851966 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgfrv\" (UniqueName: \"kubernetes.io/projected/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-kube-api-access-mgfrv\") pod \"dns-operator-589895fbb7-9mhwc\" (UID: \"ef16d7ae-66aa-45d4-b1a6-1327738a46bb\") " pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.851993 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/4711e21f-da6d-47ee-8722-64663e05de10-operand-assets\") pod \"cluster-olm-operator-77899cf6d-7vlmt\" (UID: \"4711e21f-da6d-47ee-8722-64663e05de10\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.852040 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-kubelet\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.852067 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovnkube-config\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.852161 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.852226 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert\") pod \"catalog-operator-7d9c49f57b-wsswx\" (UID: \"5a92a557-d023-4531-b3a3-e559af0fe358\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.852253 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs\") pod \"multus-admission-controller-8d675b596-xhkzl\" (UID: \"d5f84bd4-2803-41ff-a1d1-a549991fe895\") " pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.852275 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-netns\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.852293 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovnkube-config\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.852299 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa64f1b-9f10-488b-8f94-1600774062c4-serving-cert\") pod \"service-ca-operator-69b6fc6b88-vjmf6\" (UID: \"1fa64f1b-9f10-488b-8f94-1600774062c4\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.852430 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d446527-f3fd-4a37-a980-7445031928d1-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-7k8j7\" (UID: \"1d446527-f3fd-4a37-a980-7445031928d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.852475 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-slash\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.852499 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-node-log\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.852522 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-run-ovn-kubernetes\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.852433 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa64f1b-9f10-488b-8f94-1600774062c4-serving-cert\") pod \"service-ca-operator-69b6fc6b88-vjmf6\" (UID: \"1fa64f1b-9f10-488b-8f94-1600774062c4\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.852548 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl7m5\" (UniqueName: \"kubernetes.io/projected/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-kube-api-access-hl7m5\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.852575 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-8qznw\" (UID: \"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.852605 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.852634 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-kube-api-access\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.852636 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.852707 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d446527-f3fd-4a37-a980-7445031928d1-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-7k8j7\" (UID: \"1d446527-f3fd-4a37-a980-7445031928d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.852660 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/197afe92-5912-4e90-a477-e3abe001bbc7-bound-sa-token\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.852791 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2ng6\" (UniqueName: \"kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6\") pod \"network-check-target-4lx8s\" (UID: \"0e59f2e1-7fbc-43b1-bc81-7ca5f058d774\") " pod="openshift-network-diagnostics/network-check-target-4lx8s"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.852816 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-cnibin\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.852843 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wplgs\" (UniqueName: \"kubernetes.io/projected/bd1bcaff-7dbd-4559-92fc-5453993f643e-kube-api-access-wplgs\") pod \"openshift-config-operator-64488f9d78-d4wnv\" (UID: \"bd1bcaff-7dbd-4559-92fc-5453993f643e\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.852940 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-system-cni-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.853079 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-hostroot\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.853117 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njrcj\" (UniqueName: \"kubernetes.io/projected/f6ee6202-11e5-4586-ae46-075da1ad7f1a-kube-api-access-njrcj\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.853166 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qvl4\" (UniqueName: \"kubernetes.io/projected/1d446527-f3fd-4a37-a980-7445031928d1-kube-api-access-2qvl4\") pod \"kube-storage-version-migrator-operator-7f65c457f5-7k8j7\" (UID: \"1d446527-f3fd-4a37-a980-7445031928d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.853241 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-ovn\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.853264 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-log-socket\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.853282 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.853371 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-os-release\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6"
Mar 08 03:11:04.853355 master-0 kubenswrapper[7387]: I0308 03:11:04.853390 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-cni-multus\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.854778 master-0 kubenswrapper[7387]: I0308 03:11:04.853437 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-kubelet\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.853491 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/197afe92-5912-4e90-a477-e3abe001bbc7-trusted-ca\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.858830 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.858933 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa64f1b-9f10-488b-8f94-1600774062c4-config\") pod \"service-ca-operator-69b6fc6b88-vjmf6\" (UID: \"1fa64f1b-9f10-488b-8f94-1600774062c4\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.858977 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-systemd-units\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.859012 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.859046 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-systemd\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.859130 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/197afe92-5912-4e90-a477-e3abe001bbc7-trusted-ca\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.859136 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.859144 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-cni-bin\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.859499 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-socket-dir-parent\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.859511 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa64f1b-9f10-488b-8f94-1600774062c4-config\") pod \"service-ca-operator-69b6fc6b88-vjmf6\" (UID: \"1fa64f1b-9f10-488b-8f94-1600774062c4\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.859578 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-webhook-cert\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.859654 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4vq9\" (UniqueName: \"kubernetes.io/projected/aadf7b67-db33-4392-81f5-1b93eef54545-kube-api-access-n4vq9\") pod \"iptables-alerter-fpxrc\" (UID: \"aadf7b67-db33-4392-81f5-1b93eef54545\") " pod="openshift-network-operator/iptables-alerter-fpxrc"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.859683 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovnkube-script-lib\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.859715 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1bcaff-7dbd-4559-92fc-5453993f643e-serving-cert\") pod \"openshift-config-operator-64488f9d78-d4wnv\" (UID: \"bd1bcaff-7dbd-4559-92fc-5453993f643e\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.859745 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.859749 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.859809 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.859846 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.859879 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bd1bcaff-7dbd-4559-92fc-5453993f643e-available-featuregates\") pod \"openshift-config-operator-64488f9d78-d4wnv\" (UID: \"bd1bcaff-7dbd-4559-92fc-5453993f643e\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.859942 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-os-release\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.860037 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q425\" (UniqueName: \"kubernetes.io/projected/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-kube-api-access-6q425\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.860071 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.860092 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-webhook-cert\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.860149 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a058138-8039-4841-821b-7ee5bb8648e4-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-zcr8w\" (UID: \"5a058138-8039-4841-821b-7ee5bb8648e4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.860214 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovnkube-script-lib\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.860242 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bd1bcaff-7dbd-4559-92fc-5453993f643e-available-featuregates\") pod \"openshift-config-operator-64488f9d78-d4wnv\" (UID: \"bd1bcaff-7dbd-4559-92fc-5453993f643e\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.860364 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-cni-netd\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.860387 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1bcaff-7dbd-4559-92fc-5453993f643e-serving-cert\") pod \"openshift-config-operator-64488f9d78-d4wnv\" (UID: \"bd1bcaff-7dbd-4559-92fc-5453993f643e\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.860432 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.860445 master-0 kubenswrapper[7387]: I0308 03:11:04.860479 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq"
Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.860522 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-etc-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.860616 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/aadf7b67-db33-4392-81f5-1b93eef54545-iptables-alerter-script\") pod \"iptables-alerter-fpxrc\" (UID: \"aadf7b67-db33-4392-81f5-1b93eef54545\") " pod="openshift-network-operator/iptables-alerter-fpxrc"
Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.860699 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgvcz\" (UniqueName: \"kubernetes.io/projected/5a92a557-d023-4531-b3a3-e559af0fe358-kube-api-access-vgvcz\") pod \"catalog-operator-7d9c49f57b-wsswx\" (UID: \"5a92a557-d023-4531-b3a3-e559af0fe358\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.860739 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-multus-certs\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.860784 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sstv2\" (UniqueName: \"kubernetes.io/projected/d68278f6-59d5-4bbf-b969-e47635ffd4cc-kube-api-access-sstv2\") pod \"olm-operator-d64cfc9db-t659n\" (UID: \"d68278f6-59d5-4bbf-b969-e47635ffd4cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.860833 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-ovnkube-identity-cm\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.860885 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-config\") pod \"openshift-controller-manager-operator-8565d84698-h7lpf\" (UID: \"0722d9c3-77b8-4770-9171-d4aeba4b0cc7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.860955 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-config\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.861267 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-cni-binary-copy\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.861316 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2tk7\" (UniqueName: \"kubernetes.io/projected/d5eee869-c27f-4534-bbce-d954c42b36a3-kube-api-access-l2tk7\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.861481 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/aadf7b67-db33-4392-81f5-1b93eef54545-iptables-alerter-script\") pod \"iptables-alerter-fpxrc\" (UID: \"aadf7b67-db33-4392-81f5-1b93eef54545\") " pod="openshift-network-operator/iptables-alerter-fpxrc" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.861564 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-config\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" Mar 08 03:11:04.863603 master-0 
kubenswrapper[7387]: I0308 03:11:04.861611 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj7h8\" (UniqueName: \"kubernetes.io/projected/a55bef81-2381-4036-b171-3dbc77e9c25d-kube-api-access-hj7h8\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.861646 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kxn4\" (UniqueName: \"kubernetes.io/projected/ed56c17f-7e15-4776-80a6-3ef091307e89-kube-api-access-4kxn4\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.861710 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ed56c17f-7e15-4776-80a6-3ef091307e89-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.861797 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-cni-binary-copy\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.861831 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-ovnkube-identity-cm\") pod \"network-node-identity-ppdzb\" (UID: 
\"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.861967 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdzj9\" (UniqueName: \"kubernetes.io/projected/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-kube-api-access-bdzj9\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.862005 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ed56c17f-7e15-4776-80a6-3ef091307e89-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.862042 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.862086 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-system-cni-dir\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.862129 7387 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.862171 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d82cf0db-0891-482d-856b-1675843042dd-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.862191 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-config\") pod \"openshift-controller-manager-operator-8565d84698-h7lpf\" (UID: \"0722d9c3-77b8-4770-9171-d4aeba4b0cc7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.862215 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v2gh\" (UniqueName: \"kubernetes.io/projected/d5f84bd4-2803-41ff-a1d1-a549991fe895-kube-api-access-7v2gh\") pod \"multus-admission-controller-8d675b596-xhkzl\" (UID: \"d5f84bd4-2803-41ff-a1d1-a549991fe895\") " pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.862270 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-trusted-ca\") pod 
\"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.862315 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-cnibin\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.862352 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-env-overrides\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.862481 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-env-overrides\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.862626 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.862757 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.862789 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls\") pod \"dns-operator-589895fbb7-9mhwc\" (UID: \"ef16d7ae-66aa-45d4-b1a6-1327738a46bb\") " pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.862840 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5pgg\" (UniqueName: \"kubernetes.io/projected/103158c5-c99f-4224-bf5a-e23b1aaf9172-kube-api-access-m5pgg\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.862895 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ct9j\" (UniqueName: \"kubernetes.io/projected/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-kube-api-access-2ct9j\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.863218 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d82cf0db-0891-482d-856b-1675843042dd-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.863260 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.863300 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-whereabouts-configmap\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.863326 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-etc-kubernetes\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.863360 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a506cf6-bc39-4089-9caa-4c14c4d15c11-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-gstfr\" (UID: \"2a506cf6-bc39-4089-9caa-4c14c4d15c11\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.863076 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.863390 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/103158c5-c99f-4224-bf5a-e23b1aaf9172-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.863433 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.863490 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k2lp\" (UniqueName: \"kubernetes.io/projected/1fa64f1b-9f10-488b-8f94-1600774062c4-kube-api-access-8k2lp\") pod \"service-ca-operator-69b6fc6b88-vjmf6\" (UID: \"1fa64f1b-9f10-488b-8f94-1600774062c4\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.863541 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-env-overrides\") pod \"ovnkube-node-jq7bv\" (UID: 
\"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.863584 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-conf-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:11:04.863603 master-0 kubenswrapper[7387]: I0308 03:11:04.863637 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7flfl\" (UniqueName: \"kubernetes.io/projected/2a506cf6-bc39-4089-9caa-4c14c4d15c11-kube-api-access-7flfl\") pod \"openshift-apiserver-operator-799b6db4d7-gstfr\" (UID: \"2a506cf6-bc39-4089-9caa-4c14c4d15c11\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.863686 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/103158c5-c99f-4224-bf5a-e23b1aaf9172-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.863708 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a058138-8039-4841-821b-7ee5bb8648e4-serving-cert\") pod \"kube-apiserver-operator-68bd585b-zcr8w\" (UID: \"5a058138-8039-4841-821b-7ee5bb8648e4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w" Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.863755 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d82cf0db-0891-482d-856b-1675843042dd-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.863759 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-cni-bin\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.863839 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch" Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.863917 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-whereabouts-configmap\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.863969 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2468d2a3-ec65-4888-a86a-3f66fa311f56-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-xtwpr\" (UID: \"2468d2a3-ec65-4888-a86a-3f66fa311f56\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr" Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.864013 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-run-netns\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.864059 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a058138-8039-4841-821b-7ee5bb8648e4-config\") pod \"kube-apiserver-operator-68bd585b-zcr8w\" (UID: \"5a058138-8039-4841-821b-7ee5bb8648e4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w" Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.864103 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a506cf6-bc39-4089-9caa-4c14c4d15c11-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-gstfr\" (UID: \"2a506cf6-bc39-4089-9caa-4c14c4d15c11\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.864232 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a506cf6-bc39-4089-9caa-4c14c4d15c11-config\") pod \"openshift-apiserver-operator-799b6db4d7-gstfr\" (UID: \"2a506cf6-bc39-4089-9caa-4c14c4d15c11\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.864285 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ms6s7\" (UniqueName: \"kubernetes.io/projected/4711e21f-da6d-47ee-8722-64663e05de10-kube-api-access-ms6s7\") pod \"cluster-olm-operator-77899cf6d-7vlmt\" (UID: \"4711e21f-da6d-47ee-8722-64663e05de10\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt" Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.864314 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2468d2a3-ec65-4888-a86a-3f66fa311f56-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-xtwpr\" (UID: \"2468d2a3-ec65-4888-a86a-3f66fa311f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr" Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.864329 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-env-overrides\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.864342 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-service-ca\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.864515 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-daemon-config\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:11:04.865889 master-0 
kubenswrapper[7387]: I0308 03:11:04.864612 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xrfv\" (UniqueName: \"kubernetes.io/projected/89fc77c9-b444-4828-8a35-c63ea9335245-kube-api-access-6xrfv\") pod \"network-operator-7c649bf6d4-wxrfp\" (UID: \"89fc77c9-b444-4828-8a35-c63ea9335245\") " pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp"
Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.864658 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert\") pod \"olm-operator-d64cfc9db-t659n\" (UID: \"d68278f6-59d5-4bbf-b969-e47635ffd4cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n"
Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.864711 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnvtg\" (UniqueName: \"kubernetes.io/projected/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-kube-api-access-vnvtg\") pod \"openshift-controller-manager-operator-8565d84698-h7lpf\" (UID: \"0722d9c3-77b8-4770-9171-d4aeba4b0cc7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf"
Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.864738 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a058138-8039-4841-821b-7ee5bb8648e4-config\") pod \"kube-apiserver-operator-68bd585b-zcr8w\" (UID: \"5a058138-8039-4841-821b-7ee5bb8648e4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w"
Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.864749 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2468d2a3-ec65-4888-a86a-3f66fa311f56-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-xtwpr\" (UID: \"2468d2a3-ec65-4888-a86a-3f66fa311f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr"
Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.864763 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aadf7b67-db33-4392-81f5-1b93eef54545-host-slash\") pod \"iptables-alerter-fpxrc\" (UID: \"aadf7b67-db33-4392-81f5-1b93eef54545\") " pod="openshift-network-operator/iptables-alerter-fpxrc"
Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.864616 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-service-ca\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld"
Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.865028 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-daemon-config\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.865031 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch"
Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.865114 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kd6j\" (UniqueName: \"kubernetes.io/projected/197afe92-5912-4e90-a477-e3abe001bbc7-kube-api-access-2kd6j\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"
Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.865208 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-var-lib-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.865244 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a506cf6-bc39-4089-9caa-4c14c4d15c11-config\") pod \"openshift-apiserver-operator-799b6db4d7-gstfr\" (UID: \"2a506cf6-bc39-4089-9caa-4c14c4d15c11\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr"
Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.865266 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6"
Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.865244 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a058138-8039-4841-821b-7ee5bb8648e4-serving-cert\") pod \"kube-apiserver-operator-68bd585b-zcr8w\" (UID: \"5a058138-8039-4841-821b-7ee5bb8648e4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w"
Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.865331 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d446527-f3fd-4a37-a980-7445031928d1-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-7k8j7\" (UID: \"1d446527-f3fd-4a37-a980-7445031928d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7"
Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.865411 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-k8s-cni-cncf-io\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.865448 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/89fc77c9-b444-4828-8a35-c63ea9335245-metrics-tls\") pod \"network-operator-7c649bf6d4-wxrfp\" (UID: \"89fc77c9-b444-4828-8a35-c63ea9335245\") " pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp"
Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.865507 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-cni-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.865531 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2468d2a3-ec65-4888-a86a-3f66fa311f56-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-xtwpr\" (UID: \"2468d2a3-ec65-4888-a86a-3f66fa311f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr"
Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.865545 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d446527-f3fd-4a37-a980-7445031928d1-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-7k8j7\" (UID: \"1d446527-f3fd-4a37-a980-7445031928d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7"
Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.865714 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4"
Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.865747 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg"
Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.865790 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/89fc77c9-b444-4828-8a35-c63ea9335245-metrics-tls\") pod \"network-operator-7c649bf6d4-wxrfp\" (UID: \"89fc77c9-b444-4828-8a35-c63ea9335245\") " pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp"
Mar 08 03:11:04.865889 master-0 kubenswrapper[7387]: I0308 03:11:04.865889 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-h7lpf\" (UID: \"0722d9c3-77b8-4770-9171-d4aeba4b0cc7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf"
Mar 08 03:11:04.867526 master-0 kubenswrapper[7387]: I0308 03:11:04.865965 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2468d2a3-ec65-4888-a86a-3f66fa311f56-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-xtwpr\" (UID: \"2468d2a3-ec65-4888-a86a-3f66fa311f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr"
Mar 08 03:11:04.867526 master-0 kubenswrapper[7387]: I0308 03:11:04.866293 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg"
Mar 08 03:11:04.867526 master-0 kubenswrapper[7387]: I0308 03:11:04.866381 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-h7lpf\" (UID: \"0722d9c3-77b8-4770-9171-d4aeba4b0cc7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf"
Mar 08 03:11:04.908583 master-0 kubenswrapper[7387]: I0308 03:11:04.908278 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89prb\" (UniqueName: \"kubernetes.io/projected/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-kube-api-access-89prb\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll"
Mar 08 03:11:04.933934 master-0 kubenswrapper[7387]: I0308 03:11:04.933879 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89e15db4-c541-4d53-878d-706fa022f970-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-rz5c8\" (UID: \"89e15db4-c541-4d53-878d-706fa022f970\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8"
Mar 08 03:11:04.951733 master-0 kubenswrapper[7387]: I0308 03:11:04.951694 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gnng\" (UniqueName: \"kubernetes.io/projected/3d69f101-60a8-41fd-bcda-4eb654c626a2-kube-api-access-8gnng\") pod \"csi-snapshot-controller-operator-5685fbc7d-xbrdp\" (UID: \"3d69f101-60a8-41fd-bcda-4eb654c626a2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-xbrdp"
Mar 08 03:11:04.962477 master-0 kubenswrapper[7387]: W0308 03:11:04.962212 7387 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Mar 08 03:11:04.962477 master-0 kubenswrapper[7387]: E0308 03:11:04.962257 7387 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0"
Mar 08 03:11:04.966923 master-0 kubenswrapper[7387]: I0308 03:11:04.966859 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-system-cni-dir\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6"
Mar 08 03:11:04.967038 master-0 kubenswrapper[7387]: I0308 03:11:04.966992 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls\") pod \"dns-operator-589895fbb7-9mhwc\" (UID: \"ef16d7ae-66aa-45d4-b1a6-1327738a46bb\") " pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc"
Mar 08 03:11:04.967075 master-0 kubenswrapper[7387]: I0308 03:11:04.967056 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-cnibin\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6"
Mar 08 03:11:04.967112 master-0 kubenswrapper[7387]: I0308 03:11:04.967061 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-system-cni-dir\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6"
Mar 08 03:11:04.967150 master-0 kubenswrapper[7387]: I0308 03:11:04.967103 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld"
Mar 08 03:11:04.967289 master-0 kubenswrapper[7387]: I0308 03:11:04.967192 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx"
Mar 08 03:11:04.967289 master-0 kubenswrapper[7387]: I0308 03:11:04.967233 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld"
Mar 08 03:11:04.967289 master-0 kubenswrapper[7387]: I0308 03:11:04.967204 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-cnibin\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6"
Mar 08 03:11:04.967289 master-0 kubenswrapper[7387]: I0308 03:11:04.967261 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-etc-kubernetes\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.967412 master-0 kubenswrapper[7387]: I0308 03:11:04.967305 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-conf-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.967412 master-0 kubenswrapper[7387]: I0308 03:11:04.967362 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-etc-kubernetes\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.967412 master-0 kubenswrapper[7387]: I0308 03:11:04.967382 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-conf-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.967486 master-0 kubenswrapper[7387]: I0308 03:11:04.967403 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-cni-bin\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.967486 master-0 kubenswrapper[7387]: E0308 03:11:04.967298 7387 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 08 03:11:04.967486 master-0 kubenswrapper[7387]: I0308 03:11:04.967448 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-run-netns\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.967555 master-0 kubenswrapper[7387]: E0308 03:11:04.967369 7387 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 08 03:11:04.967555 master-0 kubenswrapper[7387]: I0308 03:11:04.967514 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-cni-bin\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.967555 master-0 kubenswrapper[7387]: I0308 03:11:04.967545 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-run-netns\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.967630 master-0 kubenswrapper[7387]: E0308 03:11:04.967502 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls podName:ef16d7ae-66aa-45d4-b1a6-1327738a46bb nodeName:}" failed. No retries permitted until 2026-03-08 03:11:05.467477496 +0000 UTC m=+1.861953197 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls") pod "dns-operator-589895fbb7-9mhwc" (UID: "ef16d7ae-66aa-45d4-b1a6-1327738a46bb") : secret "metrics-tls" not found
Mar 08 03:11:04.967661 master-0 kubenswrapper[7387]: E0308 03:11:04.967628 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls podName:ed56c17f-7e15-4776-80a6-3ef091307e89 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:05.467595599 +0000 UTC m=+1.862071310 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-hzlxx" (UID: "ed56c17f-7e15-4776-80a6-3ef091307e89") : secret "cluster-monitoring-operator-tls" not found
Mar 08 03:11:04.967691 master-0 kubenswrapper[7387]: I0308 03:11:04.967660 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert\") pod \"olm-operator-d64cfc9db-t659n\" (UID: \"d68278f6-59d5-4bbf-b969-e47635ffd4cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n"
Mar 08 03:11:04.967848 master-0 kubenswrapper[7387]: E0308 03:11:04.967781 7387 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 08 03:11:04.967848 master-0 kubenswrapper[7387]: E0308 03:11:04.967831 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert podName:d68278f6-59d5-4bbf-b969-e47635ffd4cc nodeName:}" failed. No retries permitted until 2026-03-08 03:11:05.467816235 +0000 UTC m=+1.862291916 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert") pod "olm-operator-d64cfc9db-t659n" (UID: "d68278f6-59d5-4bbf-b969-e47635ffd4cc") : secret "olm-operator-serving-cert" not found
Mar 08 03:11:04.969845 master-0 kubenswrapper[7387]: I0308 03:11:04.969803 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aadf7b67-db33-4392-81f5-1b93eef54545-host-slash\") pod \"iptables-alerter-fpxrc\" (UID: \"aadf7b67-db33-4392-81f5-1b93eef54545\") " pod="openshift-network-operator/iptables-alerter-fpxrc"
Mar 08 03:11:04.969923 master-0 kubenswrapper[7387]: I0308 03:11:04.969874 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6"
Mar 08 03:11:04.969983 master-0 kubenswrapper[7387]: I0308 03:11:04.969931 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aadf7b67-db33-4392-81f5-1b93eef54545-host-slash\") pod \"iptables-alerter-fpxrc\" (UID: \"aadf7b67-db33-4392-81f5-1b93eef54545\") " pod="openshift-network-operator/iptables-alerter-fpxrc"
Mar 08 03:11:04.970272 master-0 kubenswrapper[7387]: I0308 03:11:04.970043 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-k8s-cni-cncf-io\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.970272 master-0 kubenswrapper[7387]: I0308 03:11:04.970083 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-var-lib-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.970272 master-0 kubenswrapper[7387]: I0308 03:11:04.970045 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6"
Mar 08 03:11:04.970272 master-0 kubenswrapper[7387]: I0308 03:11:04.970101 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4"
Mar 08 03:11:04.970272 master-0 kubenswrapper[7387]: E0308 03:11:04.970139 7387 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 08 03:11:04.970272 master-0 kubenswrapper[7387]: I0308 03:11:04.970157 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-cni-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.970272 master-0 kubenswrapper[7387]: E0308 03:11:04.970165 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls podName:103158c5-c99f-4224-bf5a-e23b1aaf9172 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:05.470156656 +0000 UTC m=+1.864632337 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-c4zs4" (UID: "103158c5-c99f-4224-bf5a-e23b1aaf9172") : secret "node-tuning-operator-tls" not found
Mar 08 03:11:04.970272 master-0 kubenswrapper[7387]: I0308 03:11:04.970215 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-cni-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.970272 master-0 kubenswrapper[7387]: I0308 03:11:04.970237 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-var-lib-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.970505 master-0 kubenswrapper[7387]: I0308 03:11:04.970315 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/89fc77c9-b444-4828-8a35-c63ea9335245-host-etc-kube\") pod \"network-operator-7c649bf6d4-wxrfp\" (UID: \"89fc77c9-b444-4828-8a35-c63ea9335245\") " pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp"
Mar 08 03:11:04.970505 master-0 kubenswrapper[7387]: I0308 03:11:04.970347 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-k8s-cni-cncf-io\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.970505 master-0 kubenswrapper[7387]: I0308 03:11:04.970377 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-kubelet\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.970505 master-0 kubenswrapper[7387]: I0308 03:11:04.970421 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-netns\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.970505 master-0 kubenswrapper[7387]: I0308 03:11:04.970454 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert\") pod \"catalog-operator-7d9c49f57b-wsswx\" (UID: \"5a92a557-d023-4531-b3a3-e559af0fe358\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx"
Mar 08 03:11:04.970505 master-0 kubenswrapper[7387]: I0308 03:11:04.970487 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs\") pod \"multus-admission-controller-8d675b596-xhkzl\" (UID: \"d5f84bd4-2803-41ff-a1d1-a549991fe895\") " pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl"
Mar 08 03:11:04.970647 master-0 kubenswrapper[7387]: I0308 03:11:04.970517 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-slash\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.970647 master-0 kubenswrapper[7387]: I0308 03:11:04.970587 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/89fc77c9-b444-4828-8a35-c63ea9335245-host-etc-kube\") pod \"network-operator-7c649bf6d4-wxrfp\" (UID: \"89fc77c9-b444-4828-8a35-c63ea9335245\") " pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp"
Mar 08 03:11:04.970647 master-0 kubenswrapper[7387]: I0308 03:11:04.970563 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-8qznw\" (UID: \"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw"
Mar 08 03:11:04.970722 master-0 kubenswrapper[7387]: I0308 03:11:04.970657 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld"
Mar 08 03:11:04.970722 master-0 kubenswrapper[7387]: I0308 03:11:04.970702 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2ng6\" (UniqueName: \"kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6\") pod \"network-check-target-4lx8s\" (UID: \"0e59f2e1-7fbc-43b1-bc81-7ca5f058d774\") " pod="openshift-network-diagnostics/network-check-target-4lx8s"
Mar 08 03:11:04.970774 master-0 kubenswrapper[7387]: I0308 03:11:04.970724 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-node-log\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.970774 master-0 kubenswrapper[7387]: E0308 03:11:04.970729 7387 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 08 03:11:04.970774 master-0 kubenswrapper[7387]: I0308 03:11:04.970753 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-run-ovn-kubernetes\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.970843 master-0 kubenswrapper[7387]: E0308 03:11:04.970791 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert podName:f8711b9f-3d18-4b8d-a263-2c9af9dc68a6 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:05.470772843 +0000 UTC m=+1.865248554 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-8qznw" (UID: "f8711b9f-3d18-4b8d-a263-2c9af9dc68a6") : secret "package-server-manager-serving-cert" not found
Mar 08 03:11:04.970843 master-0 kubenswrapper[7387]: I0308 03:11:04.970802 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-run-ovn-kubernetes\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.970843 master-0 kubenswrapper[7387]: I0308 03:11:04.970830 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-system-cni-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.970937 master-0 kubenswrapper[7387]: I0308 03:11:04.970864 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-cnibin\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.970937 master-0 kubenswrapper[7387]: E0308 03:11:04.970875 7387 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 08 03:11:04.970985 master-0 kubenswrapper[7387]: E0308 03:11:04.970948 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert podName:1f7c9726-057b-4c5c-8a03-9bc407dedb9b nodeName:}" failed. No retries permitted until 2026-03-08 03:11:05.470933767 +0000 UTC m=+1.865409478 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert") pod "cluster-version-operator-745944c6b7-rs4ld" (UID: "1f7c9726-057b-4c5c-8a03-9bc407dedb9b") : secret "cluster-version-operator-serving-cert" not found
Mar 08 03:11:04.971454 master-0 kubenswrapper[7387]: I0308 03:11:04.971012 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-node-log\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:04.971454 master-0 kubenswrapper[7387]: I0308 03:11:04.971052 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-system-cni-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.971454 master-0 kubenswrapper[7387]: I0308 03:11:04.971060 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-cnibin\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.971454 master-0 kubenswrapper[7387]: I0308 03:11:04.971156 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-netns\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:04.971454 master-0 kubenswrapper[7387]: E0308 03:11:04.971197 7387 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 08 03:11:04.971454 master-0 kubenswrapper[7387]: E0308 03:11:04.971218 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert podName:5a92a557-d023-4531-b3a3-e559af0fe358 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:05.471211434 +0000 UTC m=+1.865687115 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert") pod "catalog-operator-7d9c49f57b-wsswx" (UID: "5a92a557-d023-4531-b3a3-e559af0fe358") : secret "catalog-operator-serving-cert" not found
Mar 08 03:11:04.971454 master-0 kubenswrapper[7387]: E0308 03:11:04.971251 7387 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 08 03:11:04.971454 master-0 kubenswrapper[7387]: E0308 03:11:04.971270 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs podName:d5f84bd4-2803-41ff-a1d1-a549991fe895 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:05.471264425 +0000 UTC m=+1.865740106 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs") pod "multus-admission-controller-8d675b596-xhkzl" (UID: "d5f84bd4-2803-41ff-a1d1-a549991fe895") : secret "multus-admission-controller-secret" not found Mar 08 03:11:04.971454 master-0 kubenswrapper[7387]: I0308 03:11:04.971286 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-hostroot\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:11:04.971454 master-0 kubenswrapper[7387]: I0308 03:11:04.971303 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" Mar 08 03:11:04.971454 master-0 kubenswrapper[7387]: I0308 03:11:04.971319 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-os-release\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:11:04.971454 master-0 kubenswrapper[7387]: I0308 03:11:04.971334 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-cni-multus\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:11:04.971454 master-0 kubenswrapper[7387]: I0308 
03:11:04.971333 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-slash\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:11:04.971454 master-0 kubenswrapper[7387]: I0308 03:11:04.971384 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-os-release\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:11:04.971454 master-0 kubenswrapper[7387]: I0308 03:11:04.971381 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-hostroot\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:11:04.971454 master-0 kubenswrapper[7387]: I0308 03:11:04.971410 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-kubelet\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:11:04.971454 master-0 kubenswrapper[7387]: I0308 03:11:04.971449 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.971479 7387 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-kubelet\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.971499 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-ovn\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.971528 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-cni-multus\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.971557 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-log-socket\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.971576 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.971707 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-ovn\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.971681 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-systemd-units\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.971627 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-systemd-units\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.971750 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-kubelet\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.971756 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-socket-dir-parent\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.971800 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-log-socket\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.971802 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-systemd\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.971831 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-cni-bin\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.971834 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-systemd\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.971848 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.971866 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.971933 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-cni-bin\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: E0308 03:11:04.971947 7387 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.971982 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-os-release\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: E0308 03:11:04.972006 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics podName:7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:05.471987945 +0000 UTC m=+1.866463716 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-4pgcf" (UID: "7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6") : secret "marketplace-operator-metrics" not found Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.972051 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-socket-dir-parent\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: E0308 03:11:04.972145 7387 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: E0308 03:11:04.972171 7387 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.972050 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.972082 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-os-release\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: E0308 03:11:04.972191 7387 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls podName:197afe92-5912-4e90-a477-e3abe001bbc7 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:05.47217741 +0000 UTC m=+1.866653101 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls") pod "ingress-operator-677db989d6-4bpl8" (UID: "197afe92-5912-4e90-a477-e3abe001bbc7") : secret "metrics-tls" not found Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: E0308 03:11:04.972234 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs podName:f6ee6202-11e5-4586-ae46-075da1ad7f1a nodeName:}" failed. No retries permitted until 2026-03-08 03:11:05.472225641 +0000 UTC m=+1.866701332 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs") pod "network-metrics-daemon-2l64n" (UID: "f6ee6202-11e5-4586-ae46-075da1ad7f1a") : secret "metrics-daemon-secret" not found Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: E0308 03:11:04.972235 7387 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.972262 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: E0308 03:11:04.972277 7387 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert podName:103158c5-c99f-4224-bf5a-e23b1aaf9172 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:05.472265312 +0000 UTC m=+1.866741013 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-c4zs4" (UID: "103158c5-c99f-4224-bf5a-e23b1aaf9172") : secret "performance-addon-operator-webhook-cert" not found Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.972300 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.972299 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.972343 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-etc-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: E0308 03:11:04.972349 7387 secret.go:189] Couldn't get secret 
openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.972364 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-cni-netd\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.972368 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-etc-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: E0308 03:11:04.972381 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls podName:d82cf0db-0891-482d-856b-1675843042dd nodeName:}" failed. No retries permitted until 2026-03-08 03:11:05.472368635 +0000 UTC m=+1.866844326 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-brfnq" (UID: "d82cf0db-0891-482d-856b-1675843042dd") : secret "image-registry-operator-tls" not found Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.972399 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-cni-netd\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.972406 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.972426 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-multus-certs\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 03:11:04.972479 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-multus-certs\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:11:04.972676 master-0 kubenswrapper[7387]: I0308 
03:11:04.972500 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:11:04.980926 master-0 kubenswrapper[7387]: E0308 03:11:04.980874 7387 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:11:05.000502 master-0 kubenswrapper[7387]: E0308 03:11:05.000453 7387 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 08 03:11:05.022671 master-0 kubenswrapper[7387]: E0308 03:11:05.022638 7387 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 08 03:11:05.028965 master-0 kubenswrapper[7387]: I0308 03:11:05.028393 7387 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 03:11:05.046009 master-0 kubenswrapper[7387]: I0308 03:11:05.045965 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q68p\" (UniqueName: \"kubernetes.io/projected/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-kube-api-access-7q68p\") pod \"package-server-manager-854648ff6d-8qznw\" (UID: \"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" Mar 08 03:11:05.068739 master-0 kubenswrapper[7387]: I0308 03:11:05.068707 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttqvt\" (UniqueName: 
\"kubernetes.io/projected/90ef7c0a-7c6f-45aa-865d-1e247110b265-kube-api-access-ttqvt\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" Mar 08 03:11:05.090159 master-0 kubenswrapper[7387]: I0308 03:11:05.090086 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4kt5\" (UniqueName: \"kubernetes.io/projected/d82cf0db-0891-482d-856b-1675843042dd-kube-api-access-g4kt5\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" Mar 08 03:11:05.114658 master-0 kubenswrapper[7387]: I0308 03:11:05.114616 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgfrv\" (UniqueName: \"kubernetes.io/projected/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-kube-api-access-mgfrv\") pod \"dns-operator-589895fbb7-9mhwc\" (UID: \"ef16d7ae-66aa-45d4-b1a6-1327738a46bb\") " pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc" Mar 08 03:11:05.125283 master-0 kubenswrapper[7387]: I0308 03:11:05.125247 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-kube-api-access\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:11:05.146068 master-0 kubenswrapper[7387]: I0308 03:11:05.146024 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl7m5\" (UniqueName: \"kubernetes.io/projected/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-kube-api-access-hl7m5\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 
03:11:05.166742 master-0 kubenswrapper[7387]: I0308 03:11:05.166656 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/197afe92-5912-4e90-a477-e3abe001bbc7-bound-sa-token\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"
Mar 08 03:11:05.187466 master-0 kubenswrapper[7387]: I0308 03:11:05.187409 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wplgs\" (UniqueName: \"kubernetes.io/projected/bd1bcaff-7dbd-4559-92fc-5453993f643e-kube-api-access-wplgs\") pod \"openshift-config-operator-64488f9d78-d4wnv\" (UID: \"bd1bcaff-7dbd-4559-92fc-5453993f643e\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"
Mar 08 03:11:05.207352 master-0 kubenswrapper[7387]: I0308 03:11:05.207315 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qvl4\" (UniqueName: \"kubernetes.io/projected/1d446527-f3fd-4a37-a980-7445031928d1-kube-api-access-2qvl4\") pod \"kube-storage-version-migrator-operator-7f65c457f5-7k8j7\" (UID: \"1d446527-f3fd-4a37-a980-7445031928d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7"
Mar 08 03:11:05.225538 master-0 kubenswrapper[7387]: I0308 03:11:05.225487 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njrcj\" (UniqueName: \"kubernetes.io/projected/f6ee6202-11e5-4586-ae46-075da1ad7f1a-kube-api-access-njrcj\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:11:05.246132 master-0 kubenswrapper[7387]: I0308 03:11:05.246088 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4vq9\" (UniqueName: \"kubernetes.io/projected/aadf7b67-db33-4392-81f5-1b93eef54545-kube-api-access-n4vq9\") pod \"iptables-alerter-fpxrc\" (UID: \"aadf7b67-db33-4392-81f5-1b93eef54545\") " pod="openshift-network-operator/iptables-alerter-fpxrc"
Mar 08 03:11:05.269161 master-0 kubenswrapper[7387]: I0308 03:11:05.264820 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a058138-8039-4841-821b-7ee5bb8648e4-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-zcr8w\" (UID: \"5a058138-8039-4841-821b-7ee5bb8648e4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w"
Mar 08 03:11:05.284672 master-0 kubenswrapper[7387]: I0308 03:11:05.284620 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 03:11:05.290122 master-0 kubenswrapper[7387]: I0308 03:11:05.290073 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q425\" (UniqueName: \"kubernetes.io/projected/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-kube-api-access-6q425\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch"
Mar 08 03:11:05.308839 master-0 kubenswrapper[7387]: I0308 03:11:05.308494 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgvcz\" (UniqueName: \"kubernetes.io/projected/5a92a557-d023-4531-b3a3-e559af0fe358-kube-api-access-vgvcz\") pod \"catalog-operator-7d9c49f57b-wsswx\" (UID: \"5a92a557-d023-4531-b3a3-e559af0fe358\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx"
Mar 08 03:11:05.329284 master-0 kubenswrapper[7387]: I0308 03:11:05.329243 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sstv2\" (UniqueName: \"kubernetes.io/projected/d68278f6-59d5-4bbf-b969-e47635ffd4cc-kube-api-access-sstv2\") pod \"olm-operator-d64cfc9db-t659n\" (UID: \"d68278f6-59d5-4bbf-b969-e47635ffd4cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n"
Mar 08 03:11:05.346516 master-0 kubenswrapper[7387]: I0308 03:11:05.346408 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2tk7\" (UniqueName: \"kubernetes.io/projected/d5eee869-c27f-4534-bbce-d954c42b36a3-kube-api-access-l2tk7\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6"
Mar 08 03:11:05.368492 master-0 kubenswrapper[7387]: I0308 03:11:05.368443 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kxn4\" (UniqueName: \"kubernetes.io/projected/ed56c17f-7e15-4776-80a6-3ef091307e89-kube-api-access-4kxn4\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx"
Mar 08 03:11:05.385125 master-0 kubenswrapper[7387]: I0308 03:11:05.385079 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdzj9\" (UniqueName: \"kubernetes.io/projected/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-kube-api-access-bdzj9\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf"
Mar 08 03:11:05.412598 master-0 kubenswrapper[7387]: I0308 03:11:05.412518 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v2gh\" (UniqueName: \"kubernetes.io/projected/d5f84bd4-2803-41ff-a1d1-a549991fe895-kube-api-access-7v2gh\") pod \"multus-admission-controller-8d675b596-xhkzl\" (UID: \"d5f84bd4-2803-41ff-a1d1-a549991fe895\") " pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl"
Mar 08 03:11:05.440308 master-0 kubenswrapper[7387]: I0308 03:11:05.440219 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj7h8\" (UniqueName: \"kubernetes.io/projected/a55bef81-2381-4036-b171-3dbc77e9c25d-kube-api-access-hj7h8\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:11:05.452444 master-0 kubenswrapper[7387]: I0308 03:11:05.452395 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d82cf0db-0891-482d-856b-1675843042dd-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq"
Mar 08 03:11:05.465661 master-0 kubenswrapper[7387]: I0308 03:11:05.465619 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ct9j\" (UniqueName: \"kubernetes.io/projected/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-kube-api-access-2ct9j\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb"
Mar 08 03:11:05.489913 master-0 kubenswrapper[7387]: I0308 03:11:05.489450 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls\") pod \"dns-operator-589895fbb7-9mhwc\" (UID: \"ef16d7ae-66aa-45d4-b1a6-1327738a46bb\") " pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc"
Mar 08 03:11:05.490105 master-0 kubenswrapper[7387]: I0308 03:11:05.489949 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx"
Mar 08 03:11:05.490105 master-0 kubenswrapper[7387]: I0308 03:11:05.490005 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert\") pod \"olm-operator-d64cfc9db-t659n\" (UID: \"d68278f6-59d5-4bbf-b969-e47635ffd4cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n"
Mar 08 03:11:05.490105 master-0 kubenswrapper[7387]: I0308 03:11:05.490051 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4"
Mar 08 03:11:05.490105 master-0 kubenswrapper[7387]: I0308 03:11:05.490094 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs\") pod \"multus-admission-controller-8d675b596-xhkzl\" (UID: \"d5f84bd4-2803-41ff-a1d1-a549991fe895\") " pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl"
Mar 08 03:11:05.490371 master-0 kubenswrapper[7387]: I0308 03:11:05.490120 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert\") pod \"catalog-operator-7d9c49f57b-wsswx\" (UID: \"5a92a557-d023-4531-b3a3-e559af0fe358\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx"
Mar 08 03:11:05.490371 master-0 kubenswrapper[7387]: I0308 03:11:05.490146 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-8qznw\" (UID: \"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw"
Mar 08 03:11:05.490371 master-0 kubenswrapper[7387]: I0308 03:11:05.490170 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld"
Mar 08 03:11:05.490371 master-0 kubenswrapper[7387]: I0308 03:11:05.490202 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf"
Mar 08 03:11:05.490371 master-0 kubenswrapper[7387]: I0308 03:11:05.490232 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4"
Mar 08 03:11:05.490371 master-0 kubenswrapper[7387]: I0308 03:11:05.490254 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:11:05.490371 master-0 kubenswrapper[7387]: I0308 03:11:05.490276 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"
Mar 08 03:11:05.490371 master-0 kubenswrapper[7387]: I0308 03:11:05.490299 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq"
Mar 08 03:11:05.490833 master-0 kubenswrapper[7387]: E0308 03:11:05.489636 7387 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 08 03:11:05.490833 master-0 kubenswrapper[7387]: E0308 03:11:05.490476 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls podName:ef16d7ae-66aa-45d4-b1a6-1327738a46bb nodeName:}" failed. No retries permitted until 2026-03-08 03:11:06.490458963 +0000 UTC m=+2.884934644 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls") pod "dns-operator-589895fbb7-9mhwc" (UID: "ef16d7ae-66aa-45d4-b1a6-1327738a46bb") : secret "metrics-tls" not found
Mar 08 03:11:05.491066 master-0 kubenswrapper[7387]: E0308 03:11:05.491023 7387 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 08 03:11:05.491066 master-0 kubenswrapper[7387]: E0308 03:11:05.491063 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls podName:ed56c17f-7e15-4776-80a6-3ef091307e89 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:06.491053379 +0000 UTC m=+2.885529060 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-hzlxx" (UID: "ed56c17f-7e15-4776-80a6-3ef091307e89") : secret "cluster-monitoring-operator-tls" not found
Mar 08 03:11:05.491250 master-0 kubenswrapper[7387]: E0308 03:11:05.491105 7387 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 08 03:11:05.491250 master-0 kubenswrapper[7387]: E0308 03:11:05.491127 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert podName:d68278f6-59d5-4bbf-b969-e47635ffd4cc nodeName:}" failed. No retries permitted until 2026-03-08 03:11:06.49111983 +0000 UTC m=+2.885595521 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert") pod "olm-operator-d64cfc9db-t659n" (UID: "d68278f6-59d5-4bbf-b969-e47635ffd4cc") : secret "olm-operator-serving-cert" not found
Mar 08 03:11:05.491250 master-0 kubenswrapper[7387]: E0308 03:11:05.491166 7387 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 08 03:11:05.491250 master-0 kubenswrapper[7387]: E0308 03:11:05.491189 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls podName:103158c5-c99f-4224-bf5a-e23b1aaf9172 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:06.491180022 +0000 UTC m=+2.885655713 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-c4zs4" (UID: "103158c5-c99f-4224-bf5a-e23b1aaf9172") : secret "node-tuning-operator-tls" not found
Mar 08 03:11:05.491250 master-0 kubenswrapper[7387]: E0308 03:11:05.491228 7387 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 08 03:11:05.491250 master-0 kubenswrapper[7387]: E0308 03:11:05.491249 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs podName:d5f84bd4-2803-41ff-a1d1-a549991fe895 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:06.491242144 +0000 UTC m=+2.885717835 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs") pod "multus-admission-controller-8d675b596-xhkzl" (UID: "d5f84bd4-2803-41ff-a1d1-a549991fe895") : secret "multus-admission-controller-secret" not found
Mar 08 03:11:05.491635 master-0 kubenswrapper[7387]: E0308 03:11:05.491287 7387 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 08 03:11:05.491635 master-0 kubenswrapper[7387]: E0308 03:11:05.491309 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert podName:5a92a557-d023-4531-b3a3-e559af0fe358 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:06.491300965 +0000 UTC m=+2.885776646 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert") pod "catalog-operator-7d9c49f57b-wsswx" (UID: "5a92a557-d023-4531-b3a3-e559af0fe358") : secret "catalog-operator-serving-cert" not found
Mar 08 03:11:05.491635 master-0 kubenswrapper[7387]: E0308 03:11:05.491350 7387 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 08 03:11:05.491635 master-0 kubenswrapper[7387]: E0308 03:11:05.491372 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert podName:f8711b9f-3d18-4b8d-a263-2c9af9dc68a6 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:06.491364747 +0000 UTC m=+2.885840438 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-8qznw" (UID: "f8711b9f-3d18-4b8d-a263-2c9af9dc68a6") : secret "package-server-manager-serving-cert" not found
Mar 08 03:11:05.491635 master-0 kubenswrapper[7387]: E0308 03:11:05.491410 7387 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 08 03:11:05.491635 master-0 kubenswrapper[7387]: E0308 03:11:05.491431 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert podName:1f7c9726-057b-4c5c-8a03-9bc407dedb9b nodeName:}" failed. No retries permitted until 2026-03-08 03:11:06.491424508 +0000 UTC m=+2.885900189 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert") pod "cluster-version-operator-745944c6b7-rs4ld" (UID: "1f7c9726-057b-4c5c-8a03-9bc407dedb9b") : secret "cluster-version-operator-serving-cert" not found
Mar 08 03:11:05.491635 master-0 kubenswrapper[7387]: E0308 03:11:05.491467 7387 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 08 03:11:05.491635 master-0 kubenswrapper[7387]: E0308 03:11:05.491487 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics podName:7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:06.49148057 +0000 UTC m=+2.885956251 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-4pgcf" (UID: "7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6") : secret "marketplace-operator-metrics" not found
Mar 08 03:11:05.491635 master-0 kubenswrapper[7387]: E0308 03:11:05.491522 7387 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 08 03:11:05.491635 master-0 kubenswrapper[7387]: E0308 03:11:05.491547 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert podName:103158c5-c99f-4224-bf5a-e23b1aaf9172 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:06.491539731 +0000 UTC m=+2.886015412 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-c4zs4" (UID: "103158c5-c99f-4224-bf5a-e23b1aaf9172") : secret "performance-addon-operator-webhook-cert" not found
Mar 08 03:11:05.491635 master-0 kubenswrapper[7387]: E0308 03:11:05.491586 7387 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 08 03:11:05.491635 master-0 kubenswrapper[7387]: E0308 03:11:05.491606 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs podName:f6ee6202-11e5-4586-ae46-075da1ad7f1a nodeName:}" failed. No retries permitted until 2026-03-08 03:11:06.491599973 +0000 UTC m=+2.886075664 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs") pod "network-metrics-daemon-2l64n" (UID: "f6ee6202-11e5-4586-ae46-075da1ad7f1a") : secret "metrics-daemon-secret" not found
Mar 08 03:11:05.491635 master-0 kubenswrapper[7387]: E0308 03:11:05.491642 7387 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 08 03:11:05.492510 master-0 kubenswrapper[7387]: E0308 03:11:05.491701 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls podName:197afe92-5912-4e90-a477-e3abe001bbc7 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:06.491692545 +0000 UTC m=+2.886168236 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls") pod "ingress-operator-677db989d6-4bpl8" (UID: "197afe92-5912-4e90-a477-e3abe001bbc7") : secret "metrics-tls" not found
Mar 08 03:11:05.492510 master-0 kubenswrapper[7387]: E0308 03:11:05.490418 7387 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 08 03:11:05.492510 master-0 kubenswrapper[7387]: I0308 03:11:05.491710 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5pgg\" (UniqueName: \"kubernetes.io/projected/103158c5-c99f-4224-bf5a-e23b1aaf9172-kube-api-access-m5pgg\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4"
Mar 08 03:11:05.492510 master-0 kubenswrapper[7387]: E0308 03:11:05.491728 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls podName:d82cf0db-0891-482d-856b-1675843042dd nodeName:}" failed. No retries permitted until 2026-03-08 03:11:06.491721526 +0000 UTC m=+2.886197217 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-brfnq" (UID: "d82cf0db-0891-482d-856b-1675843042dd") : secret "image-registry-operator-tls" not found
Mar 08 03:11:05.516871 master-0 kubenswrapper[7387]: I0308 03:11:05.515533 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k2lp\" (UniqueName: \"kubernetes.io/projected/1fa64f1b-9f10-488b-8f94-1600774062c4-kube-api-access-8k2lp\") pod \"service-ca-operator-69b6fc6b88-vjmf6\" (UID: \"1fa64f1b-9f10-488b-8f94-1600774062c4\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6"
Mar 08 03:11:05.538932 master-0 kubenswrapper[7387]: I0308 03:11:05.537365 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7flfl\" (UniqueName: \"kubernetes.io/projected/2a506cf6-bc39-4089-9caa-4c14c4d15c11-kube-api-access-7flfl\") pod \"openshift-apiserver-operator-799b6db4d7-gstfr\" (UID: \"2a506cf6-bc39-4089-9caa-4c14c4d15c11\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr"
Mar 08 03:11:05.547720 master-0 kubenswrapper[7387]: I0308 03:11:05.547659 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms6s7\" (UniqueName: \"kubernetes.io/projected/4711e21f-da6d-47ee-8722-64663e05de10-kube-api-access-ms6s7\") pod \"cluster-olm-operator-77899cf6d-7vlmt\" (UID: \"4711e21f-da6d-47ee-8722-64663e05de10\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt"
Mar 08 03:11:05.572332 master-0 kubenswrapper[7387]: I0308 03:11:05.572271 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xrfv\" (UniqueName: \"kubernetes.io/projected/89fc77c9-b444-4828-8a35-c63ea9335245-kube-api-access-6xrfv\") pod \"network-operator-7c649bf6d4-wxrfp\" (UID: \"89fc77c9-b444-4828-8a35-c63ea9335245\") " pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp"
Mar 08 03:11:05.592477 master-0 kubenswrapper[7387]: I0308 03:11:05.592411 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnvtg\" (UniqueName: \"kubernetes.io/projected/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-kube-api-access-vnvtg\") pod \"openshift-controller-manager-operator-8565d84698-h7lpf\" (UID: \"0722d9c3-77b8-4770-9171-d4aeba4b0cc7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf"
Mar 08 03:11:05.615374 master-0 kubenswrapper[7387]: I0308 03:11:05.615322 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kd6j\" (UniqueName: \"kubernetes.io/projected/197afe92-5912-4e90-a477-e3abe001bbc7-kube-api-access-2kd6j\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"
Mar 08 03:11:05.636561 master-0 kubenswrapper[7387]: I0308 03:11:05.636508 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2468d2a3-ec65-4888-a86a-3f66fa311f56-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-xtwpr\" (UID: \"2468d2a3-ec65-4888-a86a-3f66fa311f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr"
Mar 08 03:11:05.674754 master-0 kubenswrapper[7387]: I0308 03:11:05.674653 7387 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 08 03:11:05.682104 master-0 kubenswrapper[7387]: I0308 03:11:05.681656 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2ng6\" (UniqueName: \"kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6\") pod \"network-check-target-4lx8s\" (UID: \"0e59f2e1-7fbc-43b1-bc81-7ca5f058d774\") " pod="openshift-network-diagnostics/network-check-target-4lx8s"
Mar 08 03:11:05.842858 master-0 kubenswrapper[7387]: I0308 03:11:05.842440 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-xbrdp" event={"ID":"3d69f101-60a8-41fd-bcda-4eb654c626a2","Type":"ContainerStarted","Data":"60e1587c9cf4a4020a136e8642e8046f93d54430d105f0f097e182d865618fc6"}
Mar 08 03:11:05.844409 master-0 kubenswrapper[7387]: I0308 03:11:05.844381 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" event={"ID":"2a506cf6-bc39-4089-9caa-4c14c4d15c11","Type":"ContainerStarted","Data":"886c2fa8f6da76d9fb13730fe0f87e8425e3e99bb43a5c2162f25e3c2baef044"}
Mar 08 03:11:05.845922 master-0 kubenswrapper[7387]: I0308 03:11:05.845876 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" event={"ID":"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b","Type":"ContainerStarted","Data":"0757df7640b506a1487a772cc33679d0f06cf17369689e5bb19ed682a933347c"}
Mar 08 03:11:05.847833 master-0 kubenswrapper[7387]: I0308 03:11:05.847771 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" event={"ID":"89e15db4-c541-4d53-878d-706fa022f970","Type":"ContainerStarted","Data":"6dd339bdc8fcd78151718754ca62bc4f1f1c78bc15d5ff2223af1f5068c80ca0"}
Mar 08 03:11:05.858300 master-0 kubenswrapper[7387]: I0308 03:11:05.850208 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7" event={"ID":"1d446527-f3fd-4a37-a980-7445031928d1","Type":"ContainerStarted","Data":"14837a65d7b37118db204275e04a4816d1b952e719453adc75bef1d793ecb182"}
Mar 08 03:11:05.858300 master-0 kubenswrapper[7387]: I0308 03:11:05.852241 7387 generic.go:334] "Generic (PLEG): container finished" podID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerID="9e265e782cf76f9516c413e6f08b3615e452acde7fee6964c9dbc229a25efa6c" exitCode=0
Mar 08 03:11:05.858300 master-0 kubenswrapper[7387]: I0308 03:11:05.852312 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" event={"ID":"bd1bcaff-7dbd-4559-92fc-5453993f643e","Type":"ContainerDied","Data":"9e265e782cf76f9516c413e6f08b3615e452acde7fee6964c9dbc229a25efa6c"}
Mar 08 03:11:05.858300 master-0 kubenswrapper[7387]: I0308 03:11:05.855469 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6" event={"ID":"1fa64f1b-9f10-488b-8f94-1600774062c4","Type":"ContainerStarted","Data":"97e7e8e1d4c76162fdd36f707ca3e2faaa5f8b65907e58ff8edb116f08fe408b"}
Mar 08 03:11:05.863129 master-0 kubenswrapper[7387]: I0308 03:11:05.863004 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" event={"ID":"90ef7c0a-7c6f-45aa-865d-1e247110b265","Type":"ContainerStarted","Data":"107e7aadbde6b65c42eb4756264c5507aea9b4627e7947de6f6b874799048d52"}
Mar 08 03:11:05.941926 master-0 kubenswrapper[7387]: I0308 03:11:05.941001 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-4lx8s"
Mar 08 03:11:06.128380 master-0 kubenswrapper[7387]: I0308 03:11:06.128331 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:06.153288 master-0 kubenswrapper[7387]: I0308 03:11:06.153247 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:06.449577 master-0 kubenswrapper[7387]: I0308 03:11:06.449157 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:11:06.507534 master-0 kubenswrapper[7387]: I0308 03:11:06.507394 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls\") pod \"dns-operator-589895fbb7-9mhwc\" (UID: \"ef16d7ae-66aa-45d4-b1a6-1327738a46bb\") " pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc"
Mar 08 03:11:06.507534 master-0 kubenswrapper[7387]: I0308 03:11:06.507465 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx"
Mar 08 03:11:06.507534 master-0 kubenswrapper[7387]: I0308 03:11:06.507507 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert\") pod \"olm-operator-d64cfc9db-t659n\" (UID: \"d68278f6-59d5-4bbf-b969-e47635ffd4cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n"
Mar 08 03:11:06.507534 master-0 kubenswrapper[7387]: I0308 03:11:06.507537 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4"
Mar 08 03:11:06.507847 master-0 kubenswrapper[7387]: I0308 03:11:06.507569 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert\") pod \"catalog-operator-7d9c49f57b-wsswx\" (UID: \"5a92a557-d023-4531-b3a3-e559af0fe358\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx"
Mar 08 03:11:06.507847 master-0 kubenswrapper[7387]: I0308 03:11:06.507592 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs\") pod \"multus-admission-controller-8d675b596-xhkzl\" (UID: \"d5f84bd4-2803-41ff-a1d1-a549991fe895\") " pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl"
Mar 08 03:11:06.507847 master-0 kubenswrapper[7387]: I0308 03:11:06.507629 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-8qznw\" (UID: \"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw"
Mar 08 03:11:06.507847 master-0 kubenswrapper[7387]: E0308 03:11:06.507811 7387 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 08 03:11:06.507975 master-0 kubenswrapper[7387]: E0308 03:11:06.507865 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert podName:1f7c9726-057b-4c5c-8a03-9bc407dedb9b nodeName:}" failed. No retries permitted until 2026-03-08 03:11:08.507848697 +0000 UTC m=+4.902324388 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert") pod "cluster-version-operator-745944c6b7-rs4ld" (UID: "1f7c9726-057b-4c5c-8a03-9bc407dedb9b") : secret "cluster-version-operator-serving-cert" not found
Mar 08 03:11:06.508292 master-0 kubenswrapper[7387]: E0308 03:11:06.508263 7387 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 08 03:11:06.508342 master-0 kubenswrapper[7387]: E0308 03:11:06.508307 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls podName:ef16d7ae-66aa-45d4-b1a6-1327738a46bb nodeName:}" failed. No retries permitted until 2026-03-08 03:11:08.508295879 +0000 UTC m=+4.902771560 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls") pod "dns-operator-589895fbb7-9mhwc" (UID: "ef16d7ae-66aa-45d4-b1a6-1327738a46bb") : secret "metrics-tls" not found
Mar 08 03:11:06.508378 master-0 kubenswrapper[7387]: E0308 03:11:06.508351 7387 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 08 03:11:06.508411 master-0 kubenswrapper[7387]: E0308 03:11:06.508379 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls podName:ed56c17f-7e15-4776-80a6-3ef091307e89 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:08.508370261 +0000 UTC m=+4.902845952 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-hzlxx" (UID: "ed56c17f-7e15-4776-80a6-3ef091307e89") : secret "cluster-monitoring-operator-tls" not found
Mar 08 03:11:06.508459 master-0 kubenswrapper[7387]: E0308 03:11:06.508422 7387 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 08 03:11:06.508459 master-0 kubenswrapper[7387]: E0308 03:11:06.508449 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert podName:d68278f6-59d5-4bbf-b969-e47635ffd4cc nodeName:}" failed. No retries permitted until 2026-03-08 03:11:08.508440963 +0000 UTC m=+4.902916644 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert") pod "olm-operator-d64cfc9db-t659n" (UID: "d68278f6-59d5-4bbf-b969-e47635ffd4cc") : secret "olm-operator-serving-cert" not found Mar 08 03:11:06.508530 master-0 kubenswrapper[7387]: E0308 03:11:06.508489 7387 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 08 03:11:06.508530 master-0 kubenswrapper[7387]: E0308 03:11:06.508514 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls podName:103158c5-c99f-4224-bf5a-e23b1aaf9172 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:08.508504944 +0000 UTC m=+4.902980625 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-c4zs4" (UID: "103158c5-c99f-4224-bf5a-e23b1aaf9172") : secret "node-tuning-operator-tls" not found Mar 08 03:11:06.508643 master-0 kubenswrapper[7387]: E0308 03:11:06.508553 7387 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 08 03:11:06.508643 master-0 kubenswrapper[7387]: E0308 03:11:06.508578 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert podName:5a92a557-d023-4531-b3a3-e559af0fe358 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:08.508570926 +0000 UTC m=+4.903046607 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert") pod "catalog-operator-7d9c49f57b-wsswx" (UID: "5a92a557-d023-4531-b3a3-e559af0fe358") : secret "catalog-operator-serving-cert" not found Mar 08 03:11:06.508643 master-0 kubenswrapper[7387]: E0308 03:11:06.508619 7387 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 08 03:11:06.508643 master-0 kubenswrapper[7387]: E0308 03:11:06.508642 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs podName:d5f84bd4-2803-41ff-a1d1-a549991fe895 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:08.508634868 +0000 UTC m=+4.903110549 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs") pod "multus-admission-controller-8d675b596-xhkzl" (UID: "d5f84bd4-2803-41ff-a1d1-a549991fe895") : secret "multus-admission-controller-secret" not found Mar 08 03:11:06.508798 master-0 kubenswrapper[7387]: E0308 03:11:06.508683 7387 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 08 03:11:06.508798 master-0 kubenswrapper[7387]: E0308 03:11:06.508708 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert podName:f8711b9f-3d18-4b8d-a263-2c9af9dc68a6 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:08.50869987 +0000 UTC m=+4.903175551 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-8qznw" (UID: "f8711b9f-3d18-4b8d-a263-2c9af9dc68a6") : secret "package-server-manager-serving-cert" not found Mar 08 03:11:06.508798 master-0 kubenswrapper[7387]: I0308 03:11:06.508729 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:11:06.508798 master-0 kubenswrapper[7387]: I0308 03:11:06.508760 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" Mar 08 03:11:06.508798 master-0 kubenswrapper[7387]: I0308 03:11:06.508789 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:11:06.509003 master-0 kubenswrapper[7387]: I0308 03:11:06.508810 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs\") pod \"network-metrics-daemon-2l64n\" 
(UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:11:06.509003 master-0 kubenswrapper[7387]: I0308 03:11:06.508839 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" Mar 08 03:11:06.509003 master-0 kubenswrapper[7387]: I0308 03:11:06.508863 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" Mar 08 03:11:06.509003 master-0 kubenswrapper[7387]: E0308 03:11:06.508988 7387 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 08 03:11:06.509111 master-0 kubenswrapper[7387]: E0308 03:11:06.509019 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls podName:d82cf0db-0891-482d-856b-1675843042dd nodeName:}" failed. No retries permitted until 2026-03-08 03:11:08.509009208 +0000 UTC m=+4.903484889 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-brfnq" (UID: "d82cf0db-0891-482d-856b-1675843042dd") : secret "image-registry-operator-tls" not found Mar 08 03:11:06.509146 master-0 kubenswrapper[7387]: E0308 03:11:06.509111 7387 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 08 03:11:06.509146 master-0 kubenswrapper[7387]: E0308 03:11:06.509141 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics podName:7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:08.509131861 +0000 UTC m=+4.903607542 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-4pgcf" (UID: "7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6") : secret "marketplace-operator-metrics" not found Mar 08 03:11:06.509219 master-0 kubenswrapper[7387]: E0308 03:11:06.509186 7387 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 08 03:11:06.509219 master-0 kubenswrapper[7387]: E0308 03:11:06.509210 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert podName:103158c5-c99f-4224-bf5a-e23b1aaf9172 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:08.509202033 +0000 UTC m=+4.903677714 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-c4zs4" (UID: "103158c5-c99f-4224-bf5a-e23b1aaf9172") : secret "performance-addon-operator-webhook-cert" not found Mar 08 03:11:06.509303 master-0 kubenswrapper[7387]: E0308 03:11:06.509253 7387 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 08 03:11:06.509303 master-0 kubenswrapper[7387]: E0308 03:11:06.509276 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs podName:f6ee6202-11e5-4586-ae46-075da1ad7f1a nodeName:}" failed. No retries permitted until 2026-03-08 03:11:08.509268615 +0000 UTC m=+4.903744296 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs") pod "network-metrics-daemon-2l64n" (UID: "f6ee6202-11e5-4586-ae46-075da1ad7f1a") : secret "metrics-daemon-secret" not found Mar 08 03:11:06.509378 master-0 kubenswrapper[7387]: E0308 03:11:06.509318 7387 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 08 03:11:06.509378 master-0 kubenswrapper[7387]: E0308 03:11:06.509342 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls podName:197afe92-5912-4e90-a477-e3abe001bbc7 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:08.509334576 +0000 UTC m=+4.903810257 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls") pod "ingress-operator-677db989d6-4bpl8" (UID: "197afe92-5912-4e90-a477-e3abe001bbc7") : secret "metrics-tls" not found Mar 08 03:11:06.543738 master-0 kubenswrapper[7387]: I0308 03:11:06.543703 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-4lx8s"] Mar 08 03:11:06.743352 master-0 kubenswrapper[7387]: I0308 03:11:06.743305 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9"] Mar 08 03:11:06.743544 master-0 kubenswrapper[7387]: E0308 03:11:06.743438 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="sbdb" Mar 08 03:11:06.743544 master-0 kubenswrapper[7387]: I0308 03:11:06.743449 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="sbdb" Mar 08 03:11:06.743544 master-0 kubenswrapper[7387]: E0308 03:11:06.743456 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="kube-rbac-proxy-node" Mar 08 03:11:06.743544 master-0 kubenswrapper[7387]: I0308 03:11:06.743462 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="kube-rbac-proxy-node" Mar 08 03:11:06.743544 master-0 kubenswrapper[7387]: E0308 03:11:06.743471 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="nbdb" Mar 08 03:11:06.743544 master-0 kubenswrapper[7387]: I0308 03:11:06.743477 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="nbdb" Mar 08 03:11:06.743544 master-0 kubenswrapper[7387]: E0308 03:11:06.743485 7387 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="northd" Mar 08 03:11:06.743544 master-0 kubenswrapper[7387]: I0308 03:11:06.743491 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="northd" Mar 08 03:11:06.743544 master-0 kubenswrapper[7387]: E0308 03:11:06.743499 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5e953eb-2d1d-4d67-969b-bdecc69b61f0" containerName="assisted-installer-controller" Mar 08 03:11:06.743544 master-0 kubenswrapper[7387]: I0308 03:11:06.743505 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5e953eb-2d1d-4d67-969b-bdecc69b61f0" containerName="assisted-installer-controller" Mar 08 03:11:06.743544 master-0 kubenswrapper[7387]: E0308 03:11:06.743515 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="ovn-acl-logging" Mar 08 03:11:06.743544 master-0 kubenswrapper[7387]: I0308 03:11:06.743520 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="ovn-acl-logging" Mar 08 03:11:06.743544 master-0 kubenswrapper[7387]: E0308 03:11:06.743526 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="kubecfg-setup" Mar 08 03:11:06.743544 master-0 kubenswrapper[7387]: I0308 03:11:06.743532 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="kubecfg-setup" Mar 08 03:11:06.743544 master-0 kubenswrapper[7387]: E0308 03:11:06.743539 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="kube-rbac-proxy-ovn-metrics" Mar 08 03:11:06.743544 master-0 kubenswrapper[7387]: I0308 03:11:06.743545 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" 
containerName="kube-rbac-proxy-ovn-metrics" Mar 08 03:11:06.743544 master-0 kubenswrapper[7387]: E0308 03:11:06.743553 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb1042c7-d08a-436c-a737-11573992faff" containerName="prober" Mar 08 03:11:06.743544 master-0 kubenswrapper[7387]: I0308 03:11:06.743560 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb1042c7-d08a-436c-a737-11573992faff" containerName="prober" Mar 08 03:11:06.744648 master-0 kubenswrapper[7387]: E0308 03:11:06.743567 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="ovn-controller" Mar 08 03:11:06.744648 master-0 kubenswrapper[7387]: I0308 03:11:06.743574 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="ovn-controller" Mar 08 03:11:06.744648 master-0 kubenswrapper[7387]: E0308 03:11:06.743581 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="ovnkube-controller" Mar 08 03:11:06.744648 master-0 kubenswrapper[7387]: I0308 03:11:06.743588 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="ovnkube-controller" Mar 08 03:11:06.744648 master-0 kubenswrapper[7387]: I0308 03:11:06.743638 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="northd" Mar 08 03:11:06.744648 master-0 kubenswrapper[7387]: I0308 03:11:06.743648 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="sbdb" Mar 08 03:11:06.744648 master-0 kubenswrapper[7387]: I0308 03:11:06.743655 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="ovn-acl-logging" Mar 08 03:11:06.744648 master-0 kubenswrapper[7387]: I0308 03:11:06.743661 7387 
memory_manager.go:354] "RemoveStaleState removing state" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="ovn-controller" Mar 08 03:11:06.744648 master-0 kubenswrapper[7387]: I0308 03:11:06.743668 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="nbdb" Mar 08 03:11:06.744648 master-0 kubenswrapper[7387]: I0308 03:11:06.743674 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="kube-rbac-proxy-node" Mar 08 03:11:06.744648 master-0 kubenswrapper[7387]: I0308 03:11:06.743682 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb1042c7-d08a-436c-a737-11573992faff" containerName="prober" Mar 08 03:11:06.744648 master-0 kubenswrapper[7387]: I0308 03:11:06.743689 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="ovnkube-controller" Mar 08 03:11:06.744648 master-0 kubenswrapper[7387]: I0308 03:11:06.743695 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="kube-rbac-proxy-ovn-metrics" Mar 08 03:11:06.744648 master-0 kubenswrapper[7387]: I0308 03:11:06.743702 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5e953eb-2d1d-4d67-969b-bdecc69b61f0" containerName="assisted-installer-controller" Mar 08 03:11:06.744648 master-0 kubenswrapper[7387]: I0308 03:11:06.743709 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="18c148bd-0a23-46f1-b54e-6e8fd18825d5" containerName="kubecfg-setup" Mar 08 03:11:06.744648 master-0 kubenswrapper[7387]: I0308 03:11:06.744195 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" Mar 08 03:11:06.757797 master-0 kubenswrapper[7387]: I0308 03:11:06.754388 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9"] Mar 08 03:11:06.814860 master-0 kubenswrapper[7387]: I0308 03:11:06.812721 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d8xq\" (UniqueName: \"kubernetes.io/projected/9fb588a9-6240-4513-8e4b-248eb43d3f06-kube-api-access-5d8xq\") pod \"csi-snapshot-controller-7577d6f48-kfmd9\" (UID: \"9fb588a9-6240-4513-8e4b-248eb43d3f06\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" Mar 08 03:11:06.886956 master-0 kubenswrapper[7387]: I0308 03:11:06.886888 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr" event={"ID":"2468d2a3-ec65-4888-a86a-3f66fa311f56","Type":"ContainerStarted","Data":"5344ff16fad08b9c5182bd478544ec5dffdc1907ce0e2bdd88e67cb83807f31b"} Mar 08 03:11:06.897605 master-0 kubenswrapper[7387]: I0308 03:11:06.897486 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf" event={"ID":"0722d9c3-77b8-4770-9171-d4aeba4b0cc7","Type":"ContainerStarted","Data":"8ab87543a0dca707df87062a9fccbc3d1ab6ac26bb171ba825afd502c52f108c"} Mar 08 03:11:06.909892 master-0 kubenswrapper[7387]: I0308 03:11:06.907482 7387 generic.go:334] "Generic (PLEG): container finished" podID="4711e21f-da6d-47ee-8722-64663e05de10" containerID="34bdcc1fe6a1c95721404567c2105c1c1fbc3c4b8fcdb91aba2994c23867fde9" exitCode=0 Mar 08 03:11:06.909892 master-0 kubenswrapper[7387]: I0308 03:11:06.907586 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt" event={"ID":"4711e21f-da6d-47ee-8722-64663e05de10","Type":"ContainerDied","Data":"34bdcc1fe6a1c95721404567c2105c1c1fbc3c4b8fcdb91aba2994c23867fde9"} Mar 08 03:11:06.913716 master-0 kubenswrapper[7387]: I0308 03:11:06.913367 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d8xq\" (UniqueName: \"kubernetes.io/projected/9fb588a9-6240-4513-8e4b-248eb43d3f06-kube-api-access-5d8xq\") pod \"csi-snapshot-controller-7577d6f48-kfmd9\" (UID: \"9fb588a9-6240-4513-8e4b-248eb43d3f06\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" Mar 08 03:11:06.919630 master-0 kubenswrapper[7387]: I0308 03:11:06.919427 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-4lx8s" event={"ID":"0e59f2e1-7fbc-43b1-bc81-7ca5f058d774","Type":"ContainerStarted","Data":"417d08d19e981bed2425a24b9bf8b30abe91a7b89bec0c66b1687cff594da3db"} Mar 08 03:11:06.919630 master-0 kubenswrapper[7387]: I0308 03:11:06.919474 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-4lx8s" event={"ID":"0e59f2e1-7fbc-43b1-bc81-7ca5f058d774","Type":"ContainerStarted","Data":"adabf6ff71c6a21ac7dd07e118092057910e34a7816affdbe09eba458256dabb"} Mar 08 03:11:06.919630 master-0 kubenswrapper[7387]: I0308 03:11:06.919535 7387 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 03:11:06.919630 master-0 kubenswrapper[7387]: I0308 03:11:06.919558 7387 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 03:11:06.944376 master-0 kubenswrapper[7387]: I0308 03:11:06.944084 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d8xq\" (UniqueName: \"kubernetes.io/projected/9fb588a9-6240-4513-8e4b-248eb43d3f06-kube-api-access-5d8xq\") pod 
\"csi-snapshot-controller-7577d6f48-kfmd9\" (UID: \"9fb588a9-6240-4513-8e4b-248eb43d3f06\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" Mar 08 03:11:07.073515 master-0 kubenswrapper[7387]: I0308 03:11:07.073468 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" Mar 08 03:11:07.252403 master-0 kubenswrapper[7387]: I0308 03:11:07.252341 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9"] Mar 08 03:11:07.942279 master-0 kubenswrapper[7387]: I0308 03:11:07.941660 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" event={"ID":"9fb588a9-6240-4513-8e4b-248eb43d3f06","Type":"ContainerStarted","Data":"f7b4207e156e5bf2edc3fece9e2843a82ae15105a8e6a5ed4d557ebec8b1b2e1"} Mar 08 03:11:07.942279 master-0 kubenswrapper[7387]: I0308 03:11:07.941686 7387 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 03:11:07.982581 master-0 kubenswrapper[7387]: I0308 03:11:07.981196 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-rrfg6"] Mar 08 03:11:07.984937 master-0 kubenswrapper[7387]: I0308 03:11:07.984749 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-rrfg6" Mar 08 03:11:07.989792 master-0 kubenswrapper[7387]: I0308 03:11:07.989737 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 08 03:11:07.991755 master-0 kubenswrapper[7387]: I0308 03:11:07.989961 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 08 03:11:07.996735 master-0 kubenswrapper[7387]: I0308 03:11:07.996556 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-rrfg6"] Mar 08 03:11:08.132601 master-0 kubenswrapper[7387]: I0308 03:11:08.132541 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tj8l\" (UniqueName: \"kubernetes.io/projected/3c336192-80ee-4d53-a4ec-710cba95fac6-kube-api-access-6tj8l\") pod \"migrator-57ccdf9b5-rrfg6\" (UID: \"3c336192-80ee-4d53-a4ec-710cba95fac6\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-rrfg6" Mar 08 03:11:08.235057 master-0 kubenswrapper[7387]: I0308 03:11:08.233745 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tj8l\" (UniqueName: \"kubernetes.io/projected/3c336192-80ee-4d53-a4ec-710cba95fac6-kube-api-access-6tj8l\") pod \"migrator-57ccdf9b5-rrfg6\" (UID: \"3c336192-80ee-4d53-a4ec-710cba95fac6\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-rrfg6" Mar 08 03:11:08.255919 master-0 kubenswrapper[7387]: I0308 03:11:08.255871 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tj8l\" (UniqueName: \"kubernetes.io/projected/3c336192-80ee-4d53-a4ec-710cba95fac6-kube-api-access-6tj8l\") pod \"migrator-57ccdf9b5-rrfg6\" (UID: \"3c336192-80ee-4d53-a4ec-710cba95fac6\") " 
pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-rrfg6"
Mar 08 03:11:08.349312 master-0 kubenswrapper[7387]: I0308 03:11:08.349110 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-rrfg6"
Mar 08 03:11:08.540855 master-0 kubenswrapper[7387]: I0308 03:11:08.540767 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls\") pod \"dns-operator-589895fbb7-9mhwc\" (UID: \"ef16d7ae-66aa-45d4-b1a6-1327738a46bb\") " pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc"
Mar 08 03:11:08.540855 master-0 kubenswrapper[7387]: I0308 03:11:08.540829 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx"
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.541091 7387 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: I0308 03:11:08.541168 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert\") pod \"olm-operator-d64cfc9db-t659n\" (UID: \"d68278f6-59d5-4bbf-b969-e47635ffd4cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n"
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.541181 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls podName:ef16d7ae-66aa-45d4-b1a6-1327738a46bb nodeName:}" failed. No retries permitted until 2026-03-08 03:11:12.541163817 +0000 UTC m=+8.935639498 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls") pod "dns-operator-589895fbb7-9mhwc" (UID: "ef16d7ae-66aa-45d4-b1a6-1327738a46bb") : secret "metrics-tls" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.541357 7387 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.541476 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls podName:ed56c17f-7e15-4776-80a6-3ef091307e89 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:12.541428314 +0000 UTC m=+8.935904005 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-hzlxx" (UID: "ed56c17f-7e15-4776-80a6-3ef091307e89") : secret "cluster-monitoring-operator-tls" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: I0308 03:11:08.541523 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4"
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: I0308 03:11:08.541579 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert\") pod \"catalog-operator-7d9c49f57b-wsswx\" (UID: \"5a92a557-d023-4531-b3a3-e559af0fe358\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx"
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: I0308 03:11:08.541607 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs\") pod \"multus-admission-controller-8d675b596-xhkzl\" (UID: \"d5f84bd4-2803-41ff-a1d1-a549991fe895\") " pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl"
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: I0308 03:11:08.541637 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld"
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: I0308 03:11:08.541665 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-8qznw\" (UID: \"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw"
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.541673 7387 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: I0308 03:11:08.541697 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf"
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.541715 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls podName:103158c5-c99f-4224-bf5a-e23b1aaf9172 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:12.541706831 +0000 UTC m=+8.936182512 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-c4zs4" (UID: "103158c5-c99f-4224-bf5a-e23b1aaf9172") : secret "node-tuning-operator-tls" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: I0308 03:11:08.541734 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4"
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.541765 7387 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: I0308 03:11:08.541754 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.541796 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics podName:7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:12.541784843 +0000 UTC m=+8.936260544 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-4pgcf" (UID: "7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6") : secret "marketplace-operator-metrics" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: I0308 03:11:08.541814 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: I0308 03:11:08.541846 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq"
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.541818 7387 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.541979 7387 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.541983 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs podName:f6ee6202-11e5-4586-ae46-075da1ad7f1a nodeName:}" failed. No retries permitted until 2026-03-08 03:11:12.541973288 +0000 UTC m=+8.936448989 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs") pod "network-metrics-daemon-2l64n" (UID: "f6ee6202-11e5-4586-ae46-075da1ad7f1a") : secret "metrics-daemon-secret" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.541850 7387 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.542001 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls podName:197afe92-5912-4e90-a477-e3abe001bbc7 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:12.541994459 +0000 UTC m=+8.936470140 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls") pod "ingress-operator-677db989d6-4bpl8" (UID: "197afe92-5912-4e90-a477-e3abe001bbc7") : secret "metrics-tls" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.542032 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert podName:103158c5-c99f-4224-bf5a-e23b1aaf9172 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:12.542019979 +0000 UTC m=+8.936495660 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-c4zs4" (UID: "103158c5-c99f-4224-bf5a-e23b1aaf9172") : secret "performance-addon-operator-webhook-cert" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.541896 7387 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.542061 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert podName:1f7c9726-057b-4c5c-8a03-9bc407dedb9b nodeName:}" failed. No retries permitted until 2026-03-08 03:11:12.54205465 +0000 UTC m=+8.936530331 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert") pod "cluster-version-operator-745944c6b7-rs4ld" (UID: "1f7c9726-057b-4c5c-8a03-9bc407dedb9b") : secret "cluster-version-operator-serving-cert" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.541934 7387 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.542084 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert podName:5a92a557-d023-4531-b3a3-e559af0fe358 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:12.542078941 +0000 UTC m=+8.936554622 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert") pod "catalog-operator-7d9c49f57b-wsswx" (UID: "5a92a557-d023-4531-b3a3-e559af0fe358") : secret "catalog-operator-serving-cert" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.542087 7387 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.542168 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs podName:d5f84bd4-2803-41ff-a1d1-a549991fe895 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:12.542147833 +0000 UTC m=+8.936623514 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs") pod "multus-admission-controller-8d675b596-xhkzl" (UID: "d5f84bd4-2803-41ff-a1d1-a549991fe895") : secret "multus-admission-controller-secret" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.542174 7387 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.542207 7387 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.542218 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls podName:d82cf0db-0891-482d-856b-1675843042dd nodeName:}" failed. No retries permitted until 2026-03-08 03:11:12.542211915 +0000 UTC m=+8.936687596 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-brfnq" (UID: "d82cf0db-0891-482d-856b-1675843042dd") : secret "image-registry-operator-tls" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.542232 7387 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.542241 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert podName:f8711b9f-3d18-4b8d-a263-2c9af9dc68a6 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:12.542231285 +0000 UTC m=+8.936706976 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-8qznw" (UID: "f8711b9f-3d18-4b8d-a263-2c9af9dc68a6") : secret "package-server-manager-serving-cert" not found
Mar 08 03:11:08.542688 master-0 kubenswrapper[7387]: E0308 03:11:08.542264 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert podName:d68278f6-59d5-4bbf-b969-e47635ffd4cc nodeName:}" failed. No retries permitted until 2026-03-08 03:11:12.542255376 +0000 UTC m=+8.936731047 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert") pod "olm-operator-d64cfc9db-t659n" (UID: "d68278f6-59d5-4bbf-b969-e47635ffd4cc") : secret "olm-operator-serving-cert" not found
Mar 08 03:11:08.676119 master-0 kubenswrapper[7387]: I0308 03:11:08.676041 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 03:11:08.679844 master-0 kubenswrapper[7387]: I0308 03:11:08.679821 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 03:11:08.950387 master-0 kubenswrapper[7387]: I0308 03:11:08.950274 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 08 03:11:09.044140 master-0 kubenswrapper[7387]: I0308 03:11:09.043228 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-jd48j"]
Mar 08 03:11:09.044140 master-0 kubenswrapper[7387]: I0308 03:11:09.043853 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j"
Mar 08 03:11:09.046424 master-0 kubenswrapper[7387]: I0308 03:11:09.046396 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 08 03:11:09.046594 master-0 kubenswrapper[7387]: I0308 03:11:09.046547 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 08 03:11:09.049296 master-0 kubenswrapper[7387]: I0308 03:11:09.048330 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 08 03:11:09.049709 master-0 kubenswrapper[7387]: I0308 03:11:09.049675 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 08 03:11:09.050465 master-0 kubenswrapper[7387]: I0308 03:11:09.049977 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 08 03:11:09.050646 master-0 kubenswrapper[7387]: I0308 03:11:09.050617 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 08 03:11:09.055581 master-0 kubenswrapper[7387]: I0308 03:11:09.055538 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-jd48j"]
Mar 08 03:11:09.149839 master-0 kubenswrapper[7387]: I0308 03:11:09.149783 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-serving-cert\") pod \"controller-manager-6f7fd6c796-jd48j\" (UID: \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j"
Mar 08 03:11:09.150111 master-0 kubenswrapper[7387]: I0308 03:11:09.149985 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-client-ca\") pod \"controller-manager-6f7fd6c796-jd48j\" (UID: \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j"
Mar 08 03:11:09.150111 master-0 kubenswrapper[7387]: I0308 03:11:09.150034 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6jmh\" (UniqueName: \"kubernetes.io/projected/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-kube-api-access-n6jmh\") pod \"controller-manager-6f7fd6c796-jd48j\" (UID: \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j"
Mar 08 03:11:09.150229 master-0 kubenswrapper[7387]: I0308 03:11:09.150206 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-config\") pod \"controller-manager-6f7fd6c796-jd48j\" (UID: \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j"
Mar 08 03:11:09.150346 master-0 kubenswrapper[7387]: I0308 03:11:09.150317 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-jd48j\" (UID: \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j"
Mar 08 03:11:09.213651 master-0 kubenswrapper[7387]: I0308 03:11:09.213484 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:09.213804 master-0 kubenswrapper[7387]: I0308 03:11:09.213680 7387 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 08 03:11:09.213804 master-0 kubenswrapper[7387]: I0308 03:11:09.213692 7387 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 08 03:11:09.218862 master-0 kubenswrapper[7387]: I0308 03:11:09.218819 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-jnpl5"]
Mar 08 03:11:09.219362 master-0 kubenswrapper[7387]: I0308 03:11:09.219340 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5"
Mar 08 03:11:09.223390 master-0 kubenswrapper[7387]: I0308 03:11:09.222740 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 08 03:11:09.223390 master-0 kubenswrapper[7387]: I0308 03:11:09.222795 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 08 03:11:09.223390 master-0 kubenswrapper[7387]: I0308 03:11:09.222949 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 08 03:11:09.223390 master-0 kubenswrapper[7387]: I0308 03:11:09.223002 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 08 03:11:09.229067 master-0 kubenswrapper[7387]: I0308 03:11:09.229027 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-jnpl5"]
Mar 08 03:11:09.251438 master-0 kubenswrapper[7387]: I0308 03:11:09.251204 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:09.251599 master-0 kubenswrapper[7387]: I0308 03:11:09.251508 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-config\") pod \"controller-manager-6f7fd6c796-jd48j\" (UID: \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j"
Mar 08 03:11:09.251721 master-0 kubenswrapper[7387]: I0308 03:11:09.251619 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-jd48j\" (UID: \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j"
Mar 08 03:11:09.251721 master-0 kubenswrapper[7387]: I0308 03:11:09.251685 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-serving-cert\") pod \"controller-manager-6f7fd6c796-jd48j\" (UID: \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j"
Mar 08 03:11:09.251814 master-0 kubenswrapper[7387]: E0308 03:11:09.251789 7387 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found
Mar 08 03:11:09.251860 master-0 kubenswrapper[7387]: E0308 03:11:09.251808 7387 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found
Mar 08 03:11:09.251928 master-0 kubenswrapper[7387]: E0308 03:11:09.251863 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-config podName:e99dd46c-019a-4bd9-a4a2-c037ac5c29f2 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:09.751841946 +0000 UTC m=+6.146317627 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-config") pod "controller-manager-6f7fd6c796-jd48j" (UID: "e99dd46c-019a-4bd9-a4a2-c037ac5c29f2") : configmap "config" not found
Mar 08 03:11:09.251928 master-0 kubenswrapper[7387]: I0308 03:11:09.251894 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-client-ca\") pod \"controller-manager-6f7fd6c796-jd48j\" (UID: \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j"
Mar 08 03:11:09.252020 master-0 kubenswrapper[7387]: E0308 03:11:09.251950 7387 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 08 03:11:09.252020 master-0 kubenswrapper[7387]: E0308 03:11:09.251993 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-serving-cert podName:e99dd46c-019a-4bd9-a4a2-c037ac5c29f2 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:09.751980779 +0000 UTC m=+6.146456460 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-serving-cert") pod "controller-manager-6f7fd6c796-jd48j" (UID: "e99dd46c-019a-4bd9-a4a2-c037ac5c29f2") : secret "serving-cert" not found
Mar 08 03:11:09.252020 master-0 kubenswrapper[7387]: I0308 03:11:09.251952 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6jmh\" (UniqueName: \"kubernetes.io/projected/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-kube-api-access-n6jmh\") pod \"controller-manager-6f7fd6c796-jd48j\" (UID: \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j"
Mar 08 03:11:09.252214 master-0 kubenswrapper[7387]: E0308 03:11:09.252189 7387 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 08 03:11:09.252214 master-0 kubenswrapper[7387]: E0308 03:11:09.252200 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-proxy-ca-bundles podName:e99dd46c-019a-4bd9-a4a2-c037ac5c29f2 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:09.752186715 +0000 UTC m=+6.146662396 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-proxy-ca-bundles") pod "controller-manager-6f7fd6c796-jd48j" (UID: "e99dd46c-019a-4bd9-a4a2-c037ac5c29f2") : configmap "openshift-global-ca" not found
Mar 08 03:11:09.252302 master-0 kubenswrapper[7387]: E0308 03:11:09.252226 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-client-ca podName:e99dd46c-019a-4bd9-a4a2-c037ac5c29f2 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:09.752216405 +0000 UTC m=+6.146692086 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-client-ca") pod "controller-manager-6f7fd6c796-jd48j" (UID: "e99dd46c-019a-4bd9-a4a2-c037ac5c29f2") : configmap "client-ca" not found
Mar 08 03:11:09.278201 master-0 kubenswrapper[7387]: I0308 03:11:09.278163 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6jmh\" (UniqueName: \"kubernetes.io/projected/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-kube-api-access-n6jmh\") pod \"controller-manager-6f7fd6c796-jd48j\" (UID: \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j"
Mar 08 03:11:09.354360 master-0 kubenswrapper[7387]: I0308 03:11:09.354314 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm9tk\" (UniqueName: \"kubernetes.io/projected/7af634f0-65ac-402a-acd6-a8aad11b37ab-kube-api-access-sm9tk\") pod \"service-ca-84bfdbbb7f-jnpl5\" (UID: \"7af634f0-65ac-402a-acd6-a8aad11b37ab\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5"
Mar 08 03:11:09.354360 master-0 kubenswrapper[7387]: I0308 03:11:09.354360 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7af634f0-65ac-402a-acd6-a8aad11b37ab-signing-cabundle\") pod \"service-ca-84bfdbbb7f-jnpl5\" (UID: \"7af634f0-65ac-402a-acd6-a8aad11b37ab\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5"
Mar 08 03:11:09.354610 master-0 kubenswrapper[7387]: I0308 03:11:09.354386 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7af634f0-65ac-402a-acd6-a8aad11b37ab-signing-key\") pod \"service-ca-84bfdbbb7f-jnpl5\" (UID: \"7af634f0-65ac-402a-acd6-a8aad11b37ab\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5"
Mar 08 03:11:09.455028 master-0 kubenswrapper[7387]: I0308 03:11:09.454941 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm9tk\" (UniqueName: \"kubernetes.io/projected/7af634f0-65ac-402a-acd6-a8aad11b37ab-kube-api-access-sm9tk\") pod \"service-ca-84bfdbbb7f-jnpl5\" (UID: \"7af634f0-65ac-402a-acd6-a8aad11b37ab\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5"
Mar 08 03:11:09.455028 master-0 kubenswrapper[7387]: I0308 03:11:09.455027 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7af634f0-65ac-402a-acd6-a8aad11b37ab-signing-cabundle\") pod \"service-ca-84bfdbbb7f-jnpl5\" (UID: \"7af634f0-65ac-402a-acd6-a8aad11b37ab\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5"
Mar 08 03:11:09.455354 master-0 kubenswrapper[7387]: I0308 03:11:09.455091 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7af634f0-65ac-402a-acd6-a8aad11b37ab-signing-key\") pod \"service-ca-84bfdbbb7f-jnpl5\" (UID: \"7af634f0-65ac-402a-acd6-a8aad11b37ab\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5"
Mar 08 03:11:09.455939 master-0 kubenswrapper[7387]: I0308 03:11:09.455867 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7af634f0-65ac-402a-acd6-a8aad11b37ab-signing-cabundle\") pod \"service-ca-84bfdbbb7f-jnpl5\" (UID: \"7af634f0-65ac-402a-acd6-a8aad11b37ab\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5"
Mar 08 03:11:09.460515 master-0 kubenswrapper[7387]: I0308 03:11:09.460436 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7af634f0-65ac-402a-acd6-a8aad11b37ab-signing-key\") pod \"service-ca-84bfdbbb7f-jnpl5\" (UID: \"7af634f0-65ac-402a-acd6-a8aad11b37ab\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5"
Mar 08 03:11:09.471532 master-0 kubenswrapper[7387]: I0308 03:11:09.471426 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm9tk\" (UniqueName: \"kubernetes.io/projected/7af634f0-65ac-402a-acd6-a8aad11b37ab-kube-api-access-sm9tk\") pod \"service-ca-84bfdbbb7f-jnpl5\" (UID: \"7af634f0-65ac-402a-acd6-a8aad11b37ab\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5"
Mar 08 03:11:09.539941 master-0 kubenswrapper[7387]: I0308 03:11:09.539466 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5"
Mar 08 03:11:09.614399 master-0 kubenswrapper[7387]: I0308 03:11:09.614326 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-4lx8s"
Mar 08 03:11:09.757917 master-0 kubenswrapper[7387]: I0308 03:11:09.757640 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-serving-cert\") pod \"controller-manager-6f7fd6c796-jd48j\" (UID: \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j"
Mar 08 03:11:09.757917 master-0 kubenswrapper[7387]: I0308 03:11:09.757694 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-client-ca\") pod \"controller-manager-6f7fd6c796-jd48j\" (UID: \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j"
Mar 08 03:11:09.757917 master-0 kubenswrapper[7387]: I0308 03:11:09.757745 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-config\") pod \"controller-manager-6f7fd6c796-jd48j\" (UID: \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j"
Mar 08 03:11:09.757917 master-0 kubenswrapper[7387]: E0308 03:11:09.757760 7387 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 08 03:11:09.757917 master-0 kubenswrapper[7387]: I0308 03:11:09.757816 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-jd48j\" (UID: \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j"
Mar 08 03:11:09.757917 master-0 kubenswrapper[7387]: E0308 03:11:09.757839 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-serving-cert podName:e99dd46c-019a-4bd9-a4a2-c037ac5c29f2 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:10.757821496 +0000 UTC m=+7.152297177 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-serving-cert") pod "controller-manager-6f7fd6c796-jd48j" (UID: "e99dd46c-019a-4bd9-a4a2-c037ac5c29f2") : secret "serving-cert" not found
Mar 08 03:11:09.758217 master-0 kubenswrapper[7387]: E0308 03:11:09.758039 7387 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 08 03:11:09.758217 master-0 kubenswrapper[7387]: E0308 03:11:09.758090 7387 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found
Mar 08 03:11:09.758217 master-0 kubenswrapper[7387]: E0308 03:11:09.758162 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-client-ca podName:e99dd46c-019a-4bd9-a4a2-c037ac5c29f2 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:10.758129914 +0000 UTC m=+7.152605645 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-client-ca") pod "controller-manager-6f7fd6c796-jd48j" (UID: "e99dd46c-019a-4bd9-a4a2-c037ac5c29f2") : configmap "client-ca" not found
Mar 08 03:11:09.758217 master-0 kubenswrapper[7387]: E0308 03:11:09.758184 7387 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found
Mar 08 03:11:09.758217 master-0 kubenswrapper[7387]: E0308 03:11:09.758218 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-proxy-ca-bundles podName:e99dd46c-019a-4bd9-a4a2-c037ac5c29f2 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:10.758180386 +0000 UTC m=+7.152656187 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-proxy-ca-bundles") pod "controller-manager-6f7fd6c796-jd48j" (UID: "e99dd46c-019a-4bd9-a4a2-c037ac5c29f2") : configmap "openshift-global-ca" not found Mar 08 03:11:09.758359 master-0 kubenswrapper[7387]: E0308 03:11:09.758248 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-config podName:e99dd46c-019a-4bd9-a4a2-c037ac5c29f2 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:10.758233477 +0000 UTC m=+7.152709308 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-config") pod "controller-manager-6f7fd6c796-jd48j" (UID: "e99dd46c-019a-4bd9-a4a2-c037ac5c29f2") : configmap "config" not found Mar 08 03:11:09.965051 master-0 kubenswrapper[7387]: I0308 03:11:09.964985 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-jd48j"] Mar 08 03:11:09.965749 master-0 kubenswrapper[7387]: I0308 03:11:09.965320 7387 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 03:11:09.965749 master-0 kubenswrapper[7387]: E0308 03:11:09.965364 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j" podUID="e99dd46c-019a-4bd9-a4a2-c037ac5c29f2" Mar 08 03:11:09.965749 master-0 kubenswrapper[7387]: I0308 03:11:09.965424 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-fpxrc" 
event={"ID":"aadf7b67-db33-4392-81f5-1b93eef54545","Type":"ContainerStarted","Data":"c8851182ae0965b4995714a184b0bb1ee2df2086516cf57cdb097e64289a7e64"} Mar 08 03:11:09.968172 master-0 kubenswrapper[7387]: I0308 03:11:09.968145 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9"] Mar 08 03:11:09.971196 master-0 kubenswrapper[7387]: I0308 03:11:09.971182 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9" Mar 08 03:11:09.976280 master-0 kubenswrapper[7387]: I0308 03:11:09.976245 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 08 03:11:09.977242 master-0 kubenswrapper[7387]: I0308 03:11:09.977211 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 08 03:11:09.977242 master-0 kubenswrapper[7387]: I0308 03:11:09.977218 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 08 03:11:09.977426 master-0 kubenswrapper[7387]: I0308 03:11:09.977398 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 08 03:11:09.977563 master-0 kubenswrapper[7387]: I0308 03:11:09.977522 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 08 03:11:09.977634 master-0 kubenswrapper[7387]: I0308 03:11:09.977528 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9"] Mar 08 03:11:10.063302 master-0 kubenswrapper[7387]: I0308 03:11:10.062185 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65dtl\" 
(UniqueName: \"kubernetes.io/projected/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-kube-api-access-65dtl\") pod \"route-controller-manager-55c6bff5f-rc8k9\" (UID: \"ba00bf40-26c1-4eb6-b540-a32cb4ece9a2\") " pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9" Mar 08 03:11:10.063302 master-0 kubenswrapper[7387]: I0308 03:11:10.062586 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-config\") pod \"route-controller-manager-55c6bff5f-rc8k9\" (UID: \"ba00bf40-26c1-4eb6-b540-a32cb4ece9a2\") " pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9" Mar 08 03:11:10.063302 master-0 kubenswrapper[7387]: I0308 03:11:10.062688 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-serving-cert\") pod \"route-controller-manager-55c6bff5f-rc8k9\" (UID: \"ba00bf40-26c1-4eb6-b540-a32cb4ece9a2\") " pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9" Mar 08 03:11:10.063302 master-0 kubenswrapper[7387]: I0308 03:11:10.062761 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-client-ca\") pod \"route-controller-manager-55c6bff5f-rc8k9\" (UID: \"ba00bf40-26c1-4eb6-b540-a32cb4ece9a2\") " pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9" Mar 08 03:11:10.163874 master-0 kubenswrapper[7387]: I0308 03:11:10.163834 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-serving-cert\") pod \"route-controller-manager-55c6bff5f-rc8k9\" (UID: 
\"ba00bf40-26c1-4eb6-b540-a32cb4ece9a2\") " pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9" Mar 08 03:11:10.164255 master-0 kubenswrapper[7387]: I0308 03:11:10.164230 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-client-ca\") pod \"route-controller-manager-55c6bff5f-rc8k9\" (UID: \"ba00bf40-26c1-4eb6-b540-a32cb4ece9a2\") " pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9" Mar 08 03:11:10.164429 master-0 kubenswrapper[7387]: I0308 03:11:10.164415 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65dtl\" (UniqueName: \"kubernetes.io/projected/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-kube-api-access-65dtl\") pod \"route-controller-manager-55c6bff5f-rc8k9\" (UID: \"ba00bf40-26c1-4eb6-b540-a32cb4ece9a2\") " pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9" Mar 08 03:11:10.164524 master-0 kubenswrapper[7387]: I0308 03:11:10.164512 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-config\") pod \"route-controller-manager-55c6bff5f-rc8k9\" (UID: \"ba00bf40-26c1-4eb6-b540-a32cb4ece9a2\") " pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9" Mar 08 03:11:10.165338 master-0 kubenswrapper[7387]: I0308 03:11:10.165316 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-config\") pod \"route-controller-manager-55c6bff5f-rc8k9\" (UID: \"ba00bf40-26c1-4eb6-b540-a32cb4ece9a2\") " pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9" Mar 08 03:11:10.165582 master-0 kubenswrapper[7387]: E0308 03:11:10.165566 7387 secret.go:189] Couldn't get 
secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 08 03:11:10.165708 master-0 kubenswrapper[7387]: E0308 03:11:10.165697 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-serving-cert podName:ba00bf40-26c1-4eb6-b540-a32cb4ece9a2 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:10.665682894 +0000 UTC m=+7.060158575 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-serving-cert") pod "route-controller-manager-55c6bff5f-rc8k9" (UID: "ba00bf40-26c1-4eb6-b540-a32cb4ece9a2") : secret "serving-cert" not found Mar 08 03:11:10.166059 master-0 kubenswrapper[7387]: E0308 03:11:10.166042 7387 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 08 03:11:10.166170 master-0 kubenswrapper[7387]: E0308 03:11:10.166158 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-client-ca podName:ba00bf40-26c1-4eb6-b540-a32cb4ece9a2 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:10.666149386 +0000 UTC m=+7.060625067 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-client-ca") pod "route-controller-manager-55c6bff5f-rc8k9" (UID: "ba00bf40-26c1-4eb6-b540-a32cb4ece9a2") : configmap "client-ca" not found Mar 08 03:11:10.191128 master-0 kubenswrapper[7387]: I0308 03:11:10.191099 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65dtl\" (UniqueName: \"kubernetes.io/projected/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-kube-api-access-65dtl\") pod \"route-controller-manager-55c6bff5f-rc8k9\" (UID: \"ba00bf40-26c1-4eb6-b540-a32cb4ece9a2\") " pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9" Mar 08 03:11:10.663127 master-0 kubenswrapper[7387]: I0308 03:11:10.662529 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-jnpl5"] Mar 08 03:11:10.673264 master-0 kubenswrapper[7387]: I0308 03:11:10.672688 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-rrfg6"] Mar 08 03:11:10.673264 master-0 kubenswrapper[7387]: I0308 03:11:10.672731 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-serving-cert\") pod \"route-controller-manager-55c6bff5f-rc8k9\" (UID: \"ba00bf40-26c1-4eb6-b540-a32cb4ece9a2\") " pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9" Mar 08 03:11:10.673264 master-0 kubenswrapper[7387]: I0308 03:11:10.672782 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-client-ca\") pod \"route-controller-manager-55c6bff5f-rc8k9\" (UID: \"ba00bf40-26c1-4eb6-b540-a32cb4ece9a2\") " pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9" Mar 
08 03:11:10.673264 master-0 kubenswrapper[7387]: E0308 03:11:10.672865 7387 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 08 03:11:10.673264 master-0 kubenswrapper[7387]: E0308 03:11:10.672934 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-serving-cert podName:ba00bf40-26c1-4eb6-b540-a32cb4ece9a2 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:11.672894377 +0000 UTC m=+8.067370058 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-serving-cert") pod "route-controller-manager-55c6bff5f-rc8k9" (UID: "ba00bf40-26c1-4eb6-b540-a32cb4ece9a2") : secret "serving-cert" not found Mar 08 03:11:10.673534 master-0 kubenswrapper[7387]: E0308 03:11:10.673277 7387 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 08 03:11:10.673534 master-0 kubenswrapper[7387]: E0308 03:11:10.673303 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-client-ca podName:ba00bf40-26c1-4eb6-b540-a32cb4ece9a2 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:11.673295827 +0000 UTC m=+8.067771508 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-client-ca") pod "route-controller-manager-55c6bff5f-rc8k9" (UID: "ba00bf40-26c1-4eb6-b540-a32cb4ece9a2") : configmap "client-ca" not found Mar 08 03:11:10.681009 master-0 kubenswrapper[7387]: W0308 03:11:10.680872 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c336192_80ee_4d53_a4ec_710cba95fac6.slice/crio-d159152a376a0a7f2611797aef08a7b7f0428f856929aff15f4081f4e7f23f1e WatchSource:0}: Error finding container d159152a376a0a7f2611797aef08a7b7f0428f856929aff15f4081f4e7f23f1e: Status 404 returned error can't find the container with id d159152a376a0a7f2611797aef08a7b7f0428f856929aff15f4081f4e7f23f1e Mar 08 03:11:10.780891 master-0 kubenswrapper[7387]: I0308 03:11:10.773761 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-config\") pod \"controller-manager-6f7fd6c796-jd48j\" (UID: \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j" Mar 08 03:11:10.780891 master-0 kubenswrapper[7387]: I0308 03:11:10.776604 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-jd48j\" (UID: \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j" Mar 08 03:11:10.780891 master-0 kubenswrapper[7387]: I0308 03:11:10.776695 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-serving-cert\") pod \"controller-manager-6f7fd6c796-jd48j\" (UID: 
\"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j" Mar 08 03:11:10.780891 master-0 kubenswrapper[7387]: I0308 03:11:10.776730 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-client-ca\") pod \"controller-manager-6f7fd6c796-jd48j\" (UID: \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j" Mar 08 03:11:10.780891 master-0 kubenswrapper[7387]: E0308 03:11:10.776865 7387 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 08 03:11:10.780891 master-0 kubenswrapper[7387]: E0308 03:11:10.776933 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-client-ca podName:e99dd46c-019a-4bd9-a4a2-c037ac5c29f2 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:12.776918635 +0000 UTC m=+9.171394316 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-client-ca") pod "controller-manager-6f7fd6c796-jd48j" (UID: "e99dd46c-019a-4bd9-a4a2-c037ac5c29f2") : configmap "client-ca" not found Mar 08 03:11:10.780891 master-0 kubenswrapper[7387]: E0308 03:11:10.777547 7387 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 08 03:11:10.780891 master-0 kubenswrapper[7387]: E0308 03:11:10.777580 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-serving-cert podName:e99dd46c-019a-4bd9-a4a2-c037ac5c29f2 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:12.777569842 +0000 UTC m=+9.172045533 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-serving-cert") pod "controller-manager-6f7fd6c796-jd48j" (UID: "e99dd46c-019a-4bd9-a4a2-c037ac5c29f2") : secret "serving-cert" not found Mar 08 03:11:10.780891 master-0 kubenswrapper[7387]: I0308 03:11:10.776462 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-config\") pod \"controller-manager-6f7fd6c796-jd48j\" (UID: \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j" Mar 08 03:11:10.780891 master-0 kubenswrapper[7387]: I0308 03:11:10.778838 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-jd48j\" (UID: \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j" Mar 08 03:11:10.969419 master-0 kubenswrapper[7387]: I0308 03:11:10.969265 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-rrfg6" event={"ID":"3c336192-80ee-4d53-a4ec-710cba95fac6","Type":"ContainerStarted","Data":"d159152a376a0a7f2611797aef08a7b7f0428f856929aff15f4081f4e7f23f1e"} Mar 08 03:11:10.970759 master-0 kubenswrapper[7387]: I0308 03:11:10.970706 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5" event={"ID":"7af634f0-65ac-402a-acd6-a8aad11b37ab","Type":"ContainerStarted","Data":"af65ea05bf6d79301d65510b68a66fb2935b708f2ae46cc68e36995843b0c55c"} Mar 08 03:11:10.970821 master-0 kubenswrapper[7387]: I0308 03:11:10.970762 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5" 
event={"ID":"7af634f0-65ac-402a-acd6-a8aad11b37ab","Type":"ContainerStarted","Data":"a71f01482badfd599ecfabb1babd6c7d23f18015321cbb4541d2c57b236ce1e9"} Mar 08 03:11:10.973128 master-0 kubenswrapper[7387]: I0308 03:11:10.973077 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" event={"ID":"9fb588a9-6240-4513-8e4b-248eb43d3f06","Type":"ContainerStarted","Data":"628092a132c01e680651df338ffa2bf9cd4d31a076b8f10e995aa7b3b2bc5d22"} Mar 08 03:11:10.976835 master-0 kubenswrapper[7387]: I0308 03:11:10.976784 7387 generic.go:334] "Generic (PLEG): container finished" podID="4711e21f-da6d-47ee-8722-64663e05de10" containerID="4b47ae711314d73fcc77146d0c62592ca40a700fb32ad8d3e1174722f8823659" exitCode=0 Mar 08 03:11:10.976925 master-0 kubenswrapper[7387]: I0308 03:11:10.976866 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt" event={"ID":"4711e21f-da6d-47ee-8722-64663e05de10","Type":"ContainerDied","Data":"4b47ae711314d73fcc77146d0c62592ca40a700fb32ad8d3e1174722f8823659"} Mar 08 03:11:10.980778 master-0 kubenswrapper[7387]: I0308 03:11:10.980754 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j" Mar 08 03:11:10.981111 master-0 kubenswrapper[7387]: I0308 03:11:10.981078 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" event={"ID":"bd1bcaff-7dbd-4559-92fc-5453993f643e","Type":"ContainerStarted","Data":"444ccfffc52a5a8ffccee9bac8ab1880482309c7e1b3f7a74c0d255becf8fee0"} Mar 08 03:11:10.981519 master-0 kubenswrapper[7387]: I0308 03:11:10.981492 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" Mar 08 03:11:10.986670 master-0 kubenswrapper[7387]: I0308 03:11:10.986637 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j" Mar 08 03:11:11.022138 master-0 kubenswrapper[7387]: I0308 03:11:11.022063 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5" podStartSLOduration=2.022036618 podStartE2EDuration="2.022036618s" podCreationTimestamp="2026-03-08 03:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:11:10.989578824 +0000 UTC m=+7.384054505" watchObservedRunningTime="2026-03-08 03:11:11.022036618 +0000 UTC m=+7.416512299" Mar 08 03:11:11.022355 master-0 kubenswrapper[7387]: I0308 03:11:11.022303 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" podStartSLOduration=1.900383017 podStartE2EDuration="5.022297275s" podCreationTimestamp="2026-03-08 03:11:06 +0000 UTC" firstStartedPulling="2026-03-08 03:11:07.270959697 +0000 UTC m=+3.665435378" lastFinishedPulling="2026-03-08 03:11:10.392873915 +0000 UTC m=+6.787349636" 
observedRunningTime="2026-03-08 03:11:11.020896428 +0000 UTC m=+7.415372129" watchObservedRunningTime="2026-03-08 03:11:11.022297275 +0000 UTC m=+7.416772956" Mar 08 03:11:11.079795 master-0 kubenswrapper[7387]: I0308 03:11:11.079754 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-proxy-ca-bundles\") pod \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\" (UID: \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\") " Mar 08 03:11:11.080153 master-0 kubenswrapper[7387]: I0308 03:11:11.080131 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6jmh\" (UniqueName: \"kubernetes.io/projected/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-kube-api-access-n6jmh\") pod \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\" (UID: \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\") " Mar 08 03:11:11.080316 master-0 kubenswrapper[7387]: I0308 03:11:11.080297 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-config\") pod \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\" (UID: \"e99dd46c-019a-4bd9-a4a2-c037ac5c29f2\") " Mar 08 03:11:11.080638 master-0 kubenswrapper[7387]: I0308 03:11:11.080573 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e99dd46c-019a-4bd9-a4a2-c037ac5c29f2" (UID: "e99dd46c-019a-4bd9-a4a2-c037ac5c29f2"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:11:11.080967 master-0 kubenswrapper[7387]: I0308 03:11:11.080920 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-config" (OuterVolumeSpecName: "config") pod "e99dd46c-019a-4bd9-a4a2-c037ac5c29f2" (UID: "e99dd46c-019a-4bd9-a4a2-c037ac5c29f2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:11:11.082159 master-0 kubenswrapper[7387]: I0308 03:11:11.082138 7387 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-config\") on node \"master-0\" DevicePath \"\"" Mar 08 03:11:11.082293 master-0 kubenswrapper[7387]: I0308 03:11:11.082277 7387 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 08 03:11:11.088186 master-0 kubenswrapper[7387]: I0308 03:11:11.088128 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-kube-api-access-n6jmh" (OuterVolumeSpecName: "kube-api-access-n6jmh") pod "e99dd46c-019a-4bd9-a4a2-c037ac5c29f2" (UID: "e99dd46c-019a-4bd9-a4a2-c037ac5c29f2"). InnerVolumeSpecName "kube-api-access-n6jmh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:11:11.183266 master-0 kubenswrapper[7387]: I0308 03:11:11.183207 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6jmh\" (UniqueName: \"kubernetes.io/projected/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-kube-api-access-n6jmh\") on node \"master-0\" DevicePath \"\"" Mar 08 03:11:11.688574 master-0 kubenswrapper[7387]: I0308 03:11:11.688449 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-serving-cert\") pod \"route-controller-manager-55c6bff5f-rc8k9\" (UID: \"ba00bf40-26c1-4eb6-b540-a32cb4ece9a2\") " pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9" Mar 08 03:11:11.688574 master-0 kubenswrapper[7387]: I0308 03:11:11.688508 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-client-ca\") pod \"route-controller-manager-55c6bff5f-rc8k9\" (UID: \"ba00bf40-26c1-4eb6-b540-a32cb4ece9a2\") " pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9" Mar 08 03:11:11.688847 master-0 kubenswrapper[7387]: E0308 03:11:11.688655 7387 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 08 03:11:11.688847 master-0 kubenswrapper[7387]: E0308 03:11:11.688731 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-serving-cert podName:ba00bf40-26c1-4eb6-b540-a32cb4ece9a2 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:13.688712979 +0000 UTC m=+10.083188661 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-serving-cert") pod "route-controller-manager-55c6bff5f-rc8k9" (UID: "ba00bf40-26c1-4eb6-b540-a32cb4ece9a2") : secret "serving-cert" not found Mar 08 03:11:11.688970 master-0 kubenswrapper[7387]: E0308 03:11:11.688934 7387 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 08 03:11:11.689003 master-0 kubenswrapper[7387]: E0308 03:11:11.688984 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-client-ca podName:ba00bf40-26c1-4eb6-b540-a32cb4ece9a2 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:13.688969496 +0000 UTC m=+10.083445177 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-client-ca") pod "route-controller-manager-55c6bff5f-rc8k9" (UID: "ba00bf40-26c1-4eb6-b540-a32cb4ece9a2") : configmap "client-ca" not found Mar 08 03:11:11.993997 master-0 kubenswrapper[7387]: I0308 03:11:11.989751 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-jd48j"
Mar 08 03:11:12.084937 master-0 kubenswrapper[7387]: I0308 03:11:12.082048 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-jd48j"]
Mar 08 03:11:12.094482 master-0 kubenswrapper[7387]: I0308 03:11:12.094423 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-jd48j"]
Mar 08 03:11:12.202575 master-0 kubenswrapper[7387]: I0308 03:11:12.202525 7387 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:12.202575 master-0 kubenswrapper[7387]: I0308 03:11:12.202555 7387 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:12.606507 master-0 kubenswrapper[7387]: I0308 03:11:12.605980 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert\") pod \"olm-operator-d64cfc9db-t659n\" (UID: \"d68278f6-59d5-4bbf-b969-e47635ffd4cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n"
Mar 08 03:11:12.606507 master-0 kubenswrapper[7387]: I0308 03:11:12.606042 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4"
Mar 08 03:11:12.606507 master-0 kubenswrapper[7387]: I0308 03:11:12.606075 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert\") pod \"catalog-operator-7d9c49f57b-wsswx\" (UID: \"5a92a557-d023-4531-b3a3-e559af0fe358\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx"
Mar 08 03:11:12.606507 master-0 kubenswrapper[7387]: I0308 03:11:12.606093 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs\") pod \"multus-admission-controller-8d675b596-xhkzl\" (UID: \"d5f84bd4-2803-41ff-a1d1-a549991fe895\") " pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl"
Mar 08 03:11:12.606507 master-0 kubenswrapper[7387]: I0308 03:11:12.606114 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-8qznw\" (UID: \"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw"
Mar 08 03:11:12.606507 master-0 kubenswrapper[7387]: I0308 03:11:12.606133 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld"
Mar 08 03:11:12.606507 master-0 kubenswrapper[7387]: I0308 03:11:12.606155 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf"
Mar 08 03:11:12.606507 master-0 kubenswrapper[7387]: I0308 03:11:12.606175 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4"
Mar 08 03:11:12.606507 master-0 kubenswrapper[7387]: I0308 03:11:12.606190 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:11:12.606507 master-0 kubenswrapper[7387]: I0308 03:11:12.606210 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"
Mar 08 03:11:12.606507 master-0 kubenswrapper[7387]: I0308 03:11:12.606235 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq"
Mar 08 03:11:12.606507 master-0 kubenswrapper[7387]: I0308 03:11:12.606265 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls\") pod \"dns-operator-589895fbb7-9mhwc\" (UID: \"ef16d7ae-66aa-45d4-b1a6-1327738a46bb\") " pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc"
Mar 08 03:11:12.606507 master-0 kubenswrapper[7387]: I0308 03:11:12.606284 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx"
Mar 08 03:11:12.606507 master-0 kubenswrapper[7387]: E0308 03:11:12.606390 7387 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 08 03:11:12.607561 master-0 kubenswrapper[7387]: E0308 03:11:12.607241 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls podName:ed56c17f-7e15-4776-80a6-3ef091307e89 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:20.60722373 +0000 UTC m=+17.001699411 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-hzlxx" (UID: "ed56c17f-7e15-4776-80a6-3ef091307e89") : secret "cluster-monitoring-operator-tls" not found
Mar 08 03:11:12.607561 master-0 kubenswrapper[7387]: E0308 03:11:12.607268 7387 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 08 03:11:12.607561 master-0 kubenswrapper[7387]: E0308 03:11:12.607319 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert podName:1f7c9726-057b-4c5c-8a03-9bc407dedb9b nodeName:}" failed. No retries permitted until 2026-03-08 03:11:20.607303752 +0000 UTC m=+17.001779433 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert") pod "cluster-version-operator-745944c6b7-rs4ld" (UID: "1f7c9726-057b-4c5c-8a03-9bc407dedb9b") : secret "cluster-version-operator-serving-cert" not found
Mar 08 03:11:12.607561 master-0 kubenswrapper[7387]: E0308 03:11:12.607446 7387 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 08 03:11:12.607561 master-0 kubenswrapper[7387]: E0308 03:11:12.607500 7387 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 08 03:11:12.607561 master-0 kubenswrapper[7387]: E0308 03:11:12.607529 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics podName:7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:20.607511097 +0000 UTC m=+17.001986778 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-4pgcf" (UID: "7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6") : secret "marketplace-operator-metrics" not found
Mar 08 03:11:12.607776 master-0 kubenswrapper[7387]: E0308 03:11:12.607572 7387 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 08 03:11:12.607776 master-0 kubenswrapper[7387]: E0308 03:11:12.607584 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs podName:d5f84bd4-2803-41ff-a1d1-a549991fe895 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:20.607559568 +0000 UTC m=+17.002035249 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs") pod "multus-admission-controller-8d675b596-xhkzl" (UID: "d5f84bd4-2803-41ff-a1d1-a549991fe895") : secret "multus-admission-controller-secret" not found
Mar 08 03:11:12.607776 master-0 kubenswrapper[7387]: E0308 03:11:12.607621 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert podName:5a92a557-d023-4531-b3a3-e559af0fe358 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:20.60761334 +0000 UTC m=+17.002089021 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert") pod "catalog-operator-7d9c49f57b-wsswx" (UID: "5a92a557-d023-4531-b3a3-e559af0fe358") : secret "catalog-operator-serving-cert" not found
Mar 08 03:11:12.607776 master-0 kubenswrapper[7387]: E0308 03:11:12.607664 7387 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 08 03:11:12.607776 master-0 kubenswrapper[7387]: E0308 03:11:12.607697 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls podName:d82cf0db-0891-482d-856b-1675843042dd nodeName:}" failed. No retries permitted until 2026-03-08 03:11:20.607688902 +0000 UTC m=+17.002164683 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-brfnq" (UID: "d82cf0db-0891-482d-856b-1675843042dd") : secret "image-registry-operator-tls" not found
Mar 08 03:11:12.607776 master-0 kubenswrapper[7387]: E0308 03:11:12.607717 7387 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 08 03:11:12.607776 master-0 kubenswrapper[7387]: E0308 03:11:12.607742 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs podName:f6ee6202-11e5-4586-ae46-075da1ad7f1a nodeName:}" failed. No retries permitted until 2026-03-08 03:11:20.607735343 +0000 UTC m=+17.002211024 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs") pod "network-metrics-daemon-2l64n" (UID: "f6ee6202-11e5-4586-ae46-075da1ad7f1a") : secret "metrics-daemon-secret" not found
Mar 08 03:11:12.609803 master-0 kubenswrapper[7387]: E0308 03:11:12.607803 7387 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 08 03:11:12.609803 master-0 kubenswrapper[7387]: E0308 03:11:12.607822 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert podName:d68278f6-59d5-4bbf-b969-e47635ffd4cc nodeName:}" failed. No retries permitted until 2026-03-08 03:11:20.607816295 +0000 UTC m=+17.002291976 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert") pod "olm-operator-d64cfc9db-t659n" (UID: "d68278f6-59d5-4bbf-b969-e47635ffd4cc") : secret "olm-operator-serving-cert" not found
Mar 08 03:11:12.609803 master-0 kubenswrapper[7387]: E0308 03:11:12.607876 7387 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 08 03:11:12.609803 master-0 kubenswrapper[7387]: E0308 03:11:12.607896 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert podName:f8711b9f-3d18-4b8d-a263-2c9af9dc68a6 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:20.607890267 +0000 UTC m=+17.002366078 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-8qznw" (UID: "f8711b9f-3d18-4b8d-a263-2c9af9dc68a6") : secret "package-server-manager-serving-cert" not found
Mar 08 03:11:12.609803 master-0 kubenswrapper[7387]: E0308 03:11:12.607958 7387 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 08 03:11:12.609803 master-0 kubenswrapper[7387]: E0308 03:11:12.607977 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls podName:197afe92-5912-4e90-a477-e3abe001bbc7 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:20.607971069 +0000 UTC m=+17.002446750 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls") pod "ingress-operator-677db989d6-4bpl8" (UID: "197afe92-5912-4e90-a477-e3abe001bbc7") : secret "metrics-tls" not found
Mar 08 03:11:12.609803 master-0 kubenswrapper[7387]: E0308 03:11:12.608276 7387 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 08 03:11:12.609803 master-0 kubenswrapper[7387]: E0308 03:11:12.608304 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls podName:ef16d7ae-66aa-45d4-b1a6-1327738a46bb nodeName:}" failed. No retries permitted until 2026-03-08 03:11:20.608296218 +0000 UTC m=+17.002772009 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls") pod "dns-operator-589895fbb7-9mhwc" (UID: "ef16d7ae-66aa-45d4-b1a6-1327738a46bb") : secret "metrics-tls" not found
Mar 08 03:11:12.612107 master-0 kubenswrapper[7387]: I0308 03:11:12.612059 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4"
Mar 08 03:11:12.612393 master-0 kubenswrapper[7387]: I0308 03:11:12.612368 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4"
Mar 08 03:11:12.839941 master-0 kubenswrapper[7387]: I0308 03:11:12.839412 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4"
Mar 08 03:11:12.994771 master-0 kubenswrapper[7387]: I0308 03:11:12.994228 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-rrfg6" event={"ID":"3c336192-80ee-4d53-a4ec-710cba95fac6","Type":"ContainerStarted","Data":"2a1913e320eacccdf3104788f4b11c0aac21e2cc56eb52f171ed07f31bf2b4c3"}
Mar 08 03:11:13.135308 master-0 kubenswrapper[7387]: I0308 03:11:13.134516 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"]
Mar 08 03:11:13.135308 master-0 kubenswrapper[7387]: I0308 03:11:13.134984 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"
Mar 08 03:11:13.137247 master-0 kubenswrapper[7387]: I0308 03:11:13.136536 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 08 03:11:13.137247 master-0 kubenswrapper[7387]: I0308 03:11:13.137005 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 08 03:11:13.137247 master-0 kubenswrapper[7387]: I0308 03:11:13.137227 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 08 03:11:13.137404 master-0 kubenswrapper[7387]: I0308 03:11:13.137262 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 08 03:11:13.137404 master-0 kubenswrapper[7387]: I0308 03:11:13.137306 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 08 03:11:13.140791 master-0 kubenswrapper[7387]: I0308 03:11:13.140757 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 08 03:11:13.160700 master-0 kubenswrapper[7387]: I0308 03:11:13.160626 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"]
Mar 08 03:11:13.206925 master-0 kubenswrapper[7387]: I0308 03:11:13.206446 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4"]
Mar 08 03:11:13.215943 master-0 kubenswrapper[7387]: W0308 03:11:13.214090 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod103158c5_c99f_4224_bf5a_e23b1aaf9172.slice/crio-1b34330ab0e38ca065ff7c208891466fd5dc198028c2433e196ee9914284d260 WatchSource:0}: Error finding container 1b34330ab0e38ca065ff7c208891466fd5dc198028c2433e196ee9914284d260: Status 404 returned error can't find the container with id 1b34330ab0e38ca065ff7c208891466fd5dc198028c2433e196ee9914284d260
Mar 08 03:11:13.319380 master-0 kubenswrapper[7387]: I0308 03:11:13.319306 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:11:13.327263 master-0 kubenswrapper[7387]: I0308 03:11:13.327199 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/861865c2-a446-4bbf-ad71-7900d991f207-serving-cert\") pod \"controller-manager-855f6f6d7d-t5fdb\" (UID: \"861865c2-a446-4bbf-ad71-7900d991f207\") " pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"
Mar 08 03:11:13.327326 master-0 kubenswrapper[7387]: I0308 03:11:13.327270 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-proxy-ca-bundles\") pod \"controller-manager-855f6f6d7d-t5fdb\" (UID: \"861865c2-a446-4bbf-ad71-7900d991f207\") " pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"
Mar 08 03:11:13.327364 master-0 kubenswrapper[7387]: I0308 03:11:13.327338 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-client-ca\") pod \"controller-manager-855f6f6d7d-t5fdb\" (UID: \"861865c2-a446-4bbf-ad71-7900d991f207\") " pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"
Mar 08 03:11:13.327443 master-0 kubenswrapper[7387]: I0308 03:11:13.327410 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbr6g\" (UniqueName: \"kubernetes.io/projected/861865c2-a446-4bbf-ad71-7900d991f207-kube-api-access-cbr6g\") pod \"controller-manager-855f6f6d7d-t5fdb\" (UID: \"861865c2-a446-4bbf-ad71-7900d991f207\") " pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"
Mar 08 03:11:13.327514 master-0 kubenswrapper[7387]: I0308 03:11:13.327488 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-config\") pod \"controller-manager-855f6f6d7d-t5fdb\" (UID: \"861865c2-a446-4bbf-ad71-7900d991f207\") " pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"
Mar 08 03:11:13.428598 master-0 kubenswrapper[7387]: I0308 03:11:13.428490 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/861865c2-a446-4bbf-ad71-7900d991f207-serving-cert\") pod \"controller-manager-855f6f6d7d-t5fdb\" (UID: \"861865c2-a446-4bbf-ad71-7900d991f207\") " pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"
Mar 08 03:11:13.428828 master-0 kubenswrapper[7387]: E0308 03:11:13.428675 7387 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 08 03:11:13.428924 master-0 kubenswrapper[7387]: I0308 03:11:13.428804 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-proxy-ca-bundles\") pod \"controller-manager-855f6f6d7d-t5fdb\" (UID: \"861865c2-a446-4bbf-ad71-7900d991f207\") " pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"
Mar 08 03:11:13.428988 master-0 kubenswrapper[7387]: E0308 03:11:13.428943 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/861865c2-a446-4bbf-ad71-7900d991f207-serving-cert podName:861865c2-a446-4bbf-ad71-7900d991f207 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:13.92885107 +0000 UTC m=+10.323326751 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/861865c2-a446-4bbf-ad71-7900d991f207-serving-cert") pod "controller-manager-855f6f6d7d-t5fdb" (UID: "861865c2-a446-4bbf-ad71-7900d991f207") : secret "serving-cert" not found
Mar 08 03:11:13.429509 master-0 kubenswrapper[7387]: I0308 03:11:13.429446 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-client-ca\") pod \"controller-manager-855f6f6d7d-t5fdb\" (UID: \"861865c2-a446-4bbf-ad71-7900d991f207\") " pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"
Mar 08 03:11:13.429582 master-0 kubenswrapper[7387]: E0308 03:11:13.429549 7387 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 08 03:11:13.429624 master-0 kubenswrapper[7387]: E0308 03:11:13.429608 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-client-ca podName:861865c2-a446-4bbf-ad71-7900d991f207 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:13.92959614 +0000 UTC m=+10.324071901 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-client-ca") pod "controller-manager-855f6f6d7d-t5fdb" (UID: "861865c2-a446-4bbf-ad71-7900d991f207") : configmap "client-ca" not found
Mar 08 03:11:13.429677 master-0 kubenswrapper[7387]: I0308 03:11:13.429655 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbr6g\" (UniqueName: \"kubernetes.io/projected/861865c2-a446-4bbf-ad71-7900d991f207-kube-api-access-cbr6g\") pod \"controller-manager-855f6f6d7d-t5fdb\" (UID: \"861865c2-a446-4bbf-ad71-7900d991f207\") " pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"
Mar 08 03:11:13.429814 master-0 kubenswrapper[7387]: I0308 03:11:13.429779 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-config\") pod \"controller-manager-855f6f6d7d-t5fdb\" (UID: \"861865c2-a446-4bbf-ad71-7900d991f207\") " pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"
Mar 08 03:11:13.430951 master-0 kubenswrapper[7387]: I0308 03:11:13.430936 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-proxy-ca-bundles\") pod \"controller-manager-855f6f6d7d-t5fdb\" (UID: \"861865c2-a446-4bbf-ad71-7900d991f207\") " pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"
Mar 08 03:11:13.431523 master-0 kubenswrapper[7387]: I0308 03:11:13.431477 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-config\") pod \"controller-manager-855f6f6d7d-t5fdb\" (UID: \"861865c2-a446-4bbf-ad71-7900d991f207\") " pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"
Mar 08 03:11:13.448991 master-0 kubenswrapper[7387]: I0308 03:11:13.448896 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbr6g\" (UniqueName: \"kubernetes.io/projected/861865c2-a446-4bbf-ad71-7900d991f207-kube-api-access-cbr6g\") pod \"controller-manager-855f6f6d7d-t5fdb\" (UID: \"861865c2-a446-4bbf-ad71-7900d991f207\") " pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"
Mar 08 03:11:13.733185 master-0 kubenswrapper[7387]: I0308 03:11:13.732474 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-client-ca\") pod \"route-controller-manager-55c6bff5f-rc8k9\" (UID: \"ba00bf40-26c1-4eb6-b540-a32cb4ece9a2\") " pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9"
Mar 08 03:11:13.733185 master-0 kubenswrapper[7387]: I0308 03:11:13.732645 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-serving-cert\") pod \"route-controller-manager-55c6bff5f-rc8k9\" (UID: \"ba00bf40-26c1-4eb6-b540-a32cb4ece9a2\") " pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9"
Mar 08 03:11:13.733185 master-0 kubenswrapper[7387]: E0308 03:11:13.732750 7387 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 08 03:11:13.733185 master-0 kubenswrapper[7387]: E0308 03:11:13.732796 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-serving-cert podName:ba00bf40-26c1-4eb6-b540-a32cb4ece9a2 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:17.732782342 +0000 UTC m=+14.127258013 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-serving-cert") pod "route-controller-manager-55c6bff5f-rc8k9" (UID: "ba00bf40-26c1-4eb6-b540-a32cb4ece9a2") : secret "serving-cert" not found
Mar 08 03:11:13.733185 master-0 kubenswrapper[7387]: E0308 03:11:13.733144 7387 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 08 03:11:13.733452 master-0 kubenswrapper[7387]: E0308 03:11:13.733220 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-client-ca podName:ba00bf40-26c1-4eb6-b540-a32cb4ece9a2 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:17.733202443 +0000 UTC m=+14.127678124 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-client-ca") pod "route-controller-manager-55c6bff5f-rc8k9" (UID: "ba00bf40-26c1-4eb6-b540-a32cb4ece9a2") : configmap "client-ca" not found
Mar 08 03:11:13.738766 master-0 kubenswrapper[7387]: I0308 03:11:13.738712 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:11:13.744476 master-0 kubenswrapper[7387]: I0308 03:11:13.744248 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:11:13.768402 master-0 kubenswrapper[7387]: I0308 03:11:13.768331 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e99dd46c-019a-4bd9-a4a2-c037ac5c29f2" path="/var/lib/kubelet/pods/e99dd46c-019a-4bd9-a4a2-c037ac5c29f2/volumes"
Mar 08 03:11:13.935044 master-0 kubenswrapper[7387]: I0308 03:11:13.934993 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/861865c2-a446-4bbf-ad71-7900d991f207-serving-cert\") pod \"controller-manager-855f6f6d7d-t5fdb\" (UID: \"861865c2-a446-4bbf-ad71-7900d991f207\") " pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"
Mar 08 03:11:13.935226 master-0 kubenswrapper[7387]: I0308 03:11:13.935064 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-client-ca\") pod \"controller-manager-855f6f6d7d-t5fdb\" (UID: \"861865c2-a446-4bbf-ad71-7900d991f207\") " pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"
Mar 08 03:11:13.935226 master-0 kubenswrapper[7387]: E0308 03:11:13.935202 7387 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 08 03:11:13.935284 master-0 kubenswrapper[7387]: E0308 03:11:13.935265 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-client-ca podName:861865c2-a446-4bbf-ad71-7900d991f207 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:14.935247632 +0000 UTC m=+11.329723313 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-client-ca") pod "controller-manager-855f6f6d7d-t5fdb" (UID: "861865c2-a446-4bbf-ad71-7900d991f207") : configmap "client-ca" not found
Mar 08 03:11:13.941589 master-0 kubenswrapper[7387]: I0308 03:11:13.941530 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/861865c2-a446-4bbf-ad71-7900d991f207-serving-cert\") pod \"controller-manager-855f6f6d7d-t5fdb\" (UID: \"861865c2-a446-4bbf-ad71-7900d991f207\") " pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"
Mar 08 03:11:14.003231 master-0 kubenswrapper[7387]: I0308 03:11:14.002964 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" event={"ID":"103158c5-c99f-4224-bf5a-e23b1aaf9172","Type":"ContainerStarted","Data":"1b34330ab0e38ca065ff7c208891466fd5dc198028c2433e196ee9914284d260"}
Mar 08 03:11:14.005013 master-0 kubenswrapper[7387]: I0308 03:11:14.004984 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-rrfg6" event={"ID":"3c336192-80ee-4d53-a4ec-710cba95fac6","Type":"ContainerStarted","Data":"cc79bcf776c4fef6e9535d8f76ae864a55e12dcaeaee8c586d0d5d94d85e908e"}
Mar 08 03:11:14.013375 master-0 kubenswrapper[7387]: I0308 03:11:14.012322 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:11:14.024002 master-0 kubenswrapper[7387]: I0308 03:11:14.023944 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-rrfg6" podStartSLOduration=4.896887291 podStartE2EDuration="7.023926567s" podCreationTimestamp="2026-03-08 03:11:07 +0000 UTC" firstStartedPulling="2026-03-08 03:11:10.68820839 +0000 UTC m=+7.082684071" lastFinishedPulling="2026-03-08 03:11:12.815247666 +0000 UTC m=+9.209723347" observedRunningTime="2026-03-08 03:11:14.022565121 +0000 UTC m=+10.417040802" watchObservedRunningTime="2026-03-08 03:11:14.023926567 +0000 UTC m=+10.418402248"
Mar 08 03:11:14.389174 master-0 kubenswrapper[7387]: I0308 03:11:14.389112 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"
Mar 08 03:11:14.647469 master-0 kubenswrapper[7387]: I0308 03:11:14.647050 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:11:14.949258 master-0 kubenswrapper[7387]: I0308 03:11:14.949218 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-client-ca\") pod \"controller-manager-855f6f6d7d-t5fdb\" (UID: \"861865c2-a446-4bbf-ad71-7900d991f207\") " pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"
Mar 08 03:11:14.949650 master-0 kubenswrapper[7387]: E0308 03:11:14.949511 7387 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 08 03:11:14.949842 master-0 kubenswrapper[7387]: E0308 03:11:14.949823 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-client-ca podName:861865c2-a446-4bbf-ad71-7900d991f207 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:16.949799122 +0000 UTC m=+13.344274843 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-client-ca") pod "controller-manager-855f6f6d7d-t5fdb" (UID: "861865c2-a446-4bbf-ad71-7900d991f207") : configmap "client-ca" not found
Mar 08 03:11:15.017823 master-0 kubenswrapper[7387]: I0308 03:11:15.017759 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:11:16.016073 master-0 kubenswrapper[7387]: I0308 03:11:16.016002 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt" event={"ID":"4711e21f-da6d-47ee-8722-64663e05de10","Type":"ContainerStarted","Data":"768949e4d93a435cb37be6fb573bf2225669a3e078f13a7117be88e9456f605b"}
Mar 08 03:11:16.979609 master-0 kubenswrapper[7387]: I0308 03:11:16.979502 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-client-ca\") pod \"controller-manager-855f6f6d7d-t5fdb\" (UID: \"861865c2-a446-4bbf-ad71-7900d991f207\") " pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"
Mar 08 03:11:16.980737 master-0 kubenswrapper[7387]: E0308 03:11:16.979673 7387 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 08 03:11:16.980737 master-0 kubenswrapper[7387]: E0308 03:11:16.979767 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-client-ca podName:861865c2-a446-4bbf-ad71-7900d991f207 nodeName:}" failed.
No retries permitted until 2026-03-08 03:11:20.979741322 +0000 UTC m=+17.374217033 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-client-ca") pod "controller-manager-855f6f6d7d-t5fdb" (UID: "861865c2-a446-4bbf-ad71-7900d991f207") : configmap "client-ca" not found Mar 08 03:11:17.008009 master-0 kubenswrapper[7387]: I0308 03:11:17.007894 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" start-of-body= Mar 08 03:11:17.008219 master-0 kubenswrapper[7387]: I0308 03:11:17.008030 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" Mar 08 03:11:17.022890 master-0 kubenswrapper[7387]: I0308 03:11:17.022809 7387 generic.go:334] "Generic (PLEG): container finished" podID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerID="444ccfffc52a5a8ffccee9bac8ab1880482309c7e1b3f7a74c0d255becf8fee0" exitCode=0 Mar 08 03:11:17.023608 master-0 kubenswrapper[7387]: I0308 03:11:17.023563 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" event={"ID":"bd1bcaff-7dbd-4559-92fc-5453993f643e","Type":"ContainerDied","Data":"444ccfffc52a5a8ffccee9bac8ab1880482309c7e1b3f7a74c0d255becf8fee0"} Mar 08 03:11:17.024063 master-0 kubenswrapper[7387]: I0308 03:11:17.024015 7387 scope.go:117] "RemoveContainer" containerID="444ccfffc52a5a8ffccee9bac8ab1880482309c7e1b3f7a74c0d255becf8fee0" Mar 08 
03:11:17.380922 master-0 kubenswrapper[7387]: I0308 03:11:17.380847 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" Mar 08 03:11:17.790203 master-0 kubenswrapper[7387]: I0308 03:11:17.790151 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-serving-cert\") pod \"route-controller-manager-55c6bff5f-rc8k9\" (UID: \"ba00bf40-26c1-4eb6-b540-a32cb4ece9a2\") " pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9" Mar 08 03:11:17.790465 master-0 kubenswrapper[7387]: I0308 03:11:17.790223 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-client-ca\") pod \"route-controller-manager-55c6bff5f-rc8k9\" (UID: \"ba00bf40-26c1-4eb6-b540-a32cb4ece9a2\") " pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9" Mar 08 03:11:17.790465 master-0 kubenswrapper[7387]: E0308 03:11:17.790337 7387 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 08 03:11:17.790465 master-0 kubenswrapper[7387]: E0308 03:11:17.790405 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-client-ca podName:ba00bf40-26c1-4eb6-b540-a32cb4ece9a2 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:25.790382723 +0000 UTC m=+22.184858424 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-client-ca") pod "route-controller-manager-55c6bff5f-rc8k9" (UID: "ba00bf40-26c1-4eb6-b540-a32cb4ece9a2") : configmap "client-ca" not found Mar 08 03:11:17.791046 master-0 kubenswrapper[7387]: E0308 03:11:17.790842 7387 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 08 03:11:17.791046 master-0 kubenswrapper[7387]: E0308 03:11:17.791020 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-serving-cert podName:ba00bf40-26c1-4eb6-b540-a32cb4ece9a2 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:25.790973528 +0000 UTC m=+22.185449229 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-serving-cert") pod "route-controller-manager-55c6bff5f-rc8k9" (UID: "ba00bf40-26c1-4eb6-b540-a32cb4ece9a2") : secret "serving-cert" not found Mar 08 03:11:18.029639 master-0 kubenswrapper[7387]: I0308 03:11:18.028883 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" event={"ID":"bd1bcaff-7dbd-4559-92fc-5453993f643e","Type":"ContainerStarted","Data":"122d82dfb1bfd9c05bd161084f45586e27293d3320c13ab8454659ed4cdae5c0"} Mar 08 03:11:18.029639 master-0 kubenswrapper[7387]: I0308 03:11:18.029603 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" Mar 08 03:11:18.464341 master-0 kubenswrapper[7387]: I0308 03:11:18.461817 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-5f6db9bdd8-5hlgc"] Mar 08 03:11:18.464341 master-0 kubenswrapper[7387]: I0308 03:11:18.462617 7387 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.464341 master-0 kubenswrapper[7387]: I0308 03:11:18.463769 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-5f6db9bdd8-5hlgc"] Mar 08 03:11:18.469928 master-0 kubenswrapper[7387]: I0308 03:11:18.467511 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 08 03:11:18.469928 master-0 kubenswrapper[7387]: I0308 03:11:18.467698 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 08 03:11:18.469928 master-0 kubenswrapper[7387]: I0308 03:11:18.467805 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0" Mar 08 03:11:18.469928 master-0 kubenswrapper[7387]: I0308 03:11:18.467920 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0" Mar 08 03:11:18.469928 master-0 kubenswrapper[7387]: I0308 03:11:18.468037 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 08 03:11:18.469928 master-0 kubenswrapper[7387]: I0308 03:11:18.468167 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 08 03:11:18.469928 master-0 kubenswrapper[7387]: I0308 03:11:18.468321 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 08 03:11:18.469928 master-0 kubenswrapper[7387]: I0308 03:11:18.468418 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 08 03:11:18.484132 master-0 kubenswrapper[7387]: I0308 03:11:18.483816 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 08 03:11:18.485644 master-0 kubenswrapper[7387]: I0308 03:11:18.485603 7387 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 08 03:11:18.507919 master-0 kubenswrapper[7387]: I0308 03:11:18.507152 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-etcd-client\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.507919 master-0 kubenswrapper[7387]: I0308 03:11:18.507192 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-node-pullsecrets\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.507919 master-0 kubenswrapper[7387]: I0308 03:11:18.507216 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-encryption-config\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.507919 master-0 kubenswrapper[7387]: I0308 03:11:18.507236 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bpgc\" (UniqueName: \"kubernetes.io/projected/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-kube-api-access-9bpgc\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.507919 master-0 kubenswrapper[7387]: I0308 03:11:18.507263 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-trusted-ca-bundle\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.507919 master-0 kubenswrapper[7387]: I0308 03:11:18.507340 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-config\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.507919 master-0 kubenswrapper[7387]: I0308 03:11:18.507362 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-serving-cert\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.507919 master-0 kubenswrapper[7387]: I0308 03:11:18.507401 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-audit-dir\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.507919 master-0 kubenswrapper[7387]: I0308 03:11:18.507446 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-audit\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.507919 master-0 kubenswrapper[7387]: I0308 03:11:18.507466 7387 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-image-import-ca\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.507919 master-0 kubenswrapper[7387]: I0308 03:11:18.507694 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-etcd-serving-ca\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.609041 master-0 kubenswrapper[7387]: I0308 03:11:18.608997 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-audit\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.609041 master-0 kubenswrapper[7387]: I0308 03:11:18.609036 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-image-import-ca\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.609235 master-0 kubenswrapper[7387]: I0308 03:11:18.609099 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-etcd-serving-ca\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 
03:11:18.609235 master-0 kubenswrapper[7387]: I0308 03:11:18.609137 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-etcd-client\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.609235 master-0 kubenswrapper[7387]: I0308 03:11:18.609152 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-encryption-config\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.609235 master-0 kubenswrapper[7387]: I0308 03:11:18.609165 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bpgc\" (UniqueName: \"kubernetes.io/projected/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-kube-api-access-9bpgc\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.609235 master-0 kubenswrapper[7387]: I0308 03:11:18.609180 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-node-pullsecrets\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.609235 master-0 kubenswrapper[7387]: I0308 03:11:18.609195 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-trusted-ca-bundle\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " 
pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.609235 master-0 kubenswrapper[7387]: I0308 03:11:18.609209 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-config\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.609235 master-0 kubenswrapper[7387]: I0308 03:11:18.609223 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-serving-cert\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.609434 master-0 kubenswrapper[7387]: I0308 03:11:18.609254 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-audit-dir\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.609434 master-0 kubenswrapper[7387]: I0308 03:11:18.609307 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-audit-dir\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.609434 master-0 kubenswrapper[7387]: E0308 03:11:18.609361 7387 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 08 03:11:18.609434 master-0 kubenswrapper[7387]: E0308 03:11:18.609400 7387 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-audit podName:a4382075-d76b-4f2e-9ef1-5bc0bcb5d083 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:19.109385384 +0000 UTC m=+15.503861065 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-audit") pod "apiserver-5f6db9bdd8-5hlgc" (UID: "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083") : configmap "audit-0" not found Mar 08 03:11:18.610952 master-0 kubenswrapper[7387]: I0308 03:11:18.610290 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-image-import-ca\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.610952 master-0 kubenswrapper[7387]: I0308 03:11:18.610345 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-node-pullsecrets\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.610952 master-0 kubenswrapper[7387]: E0308 03:11:18.610399 7387 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found Mar 08 03:11:18.610952 master-0 kubenswrapper[7387]: E0308 03:11:18.610426 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-etcd-client podName:a4382075-d76b-4f2e-9ef1-5bc0bcb5d083 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:19.110415211 +0000 UTC m=+15.504890892 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-etcd-client") pod "apiserver-5f6db9bdd8-5hlgc" (UID: "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083") : secret "etcd-client" not found Mar 08 03:11:18.610952 master-0 kubenswrapper[7387]: I0308 03:11:18.610444 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-etcd-serving-ca\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.611318 master-0 kubenswrapper[7387]: E0308 03:11:18.611277 7387 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 08 03:11:18.611318 master-0 kubenswrapper[7387]: E0308 03:11:18.611312 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-serving-cert podName:a4382075-d76b-4f2e-9ef1-5bc0bcb5d083 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:19.111303095 +0000 UTC m=+15.505778776 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-serving-cert") pod "apiserver-5f6db9bdd8-5hlgc" (UID: "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083") : secret "serving-cert" not found Mar 08 03:11:18.611406 master-0 kubenswrapper[7387]: I0308 03:11:18.611383 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-trusted-ca-bundle\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.611725 master-0 kubenswrapper[7387]: I0308 03:11:18.611686 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-config\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.625465 master-0 kubenswrapper[7387]: I0308 03:11:18.625436 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-encryption-config\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:18.634436 master-0 kubenswrapper[7387]: I0308 03:11:18.634389 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bpgc\" (UniqueName: \"kubernetes.io/projected/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-kube-api-access-9bpgc\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:19.118061 master-0 kubenswrapper[7387]: I0308 03:11:19.117998 7387 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-etcd-client\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:19.118061 master-0 kubenswrapper[7387]: I0308 03:11:19.118041 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-serving-cert\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:19.118541 master-0 kubenswrapper[7387]: I0308 03:11:19.118280 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-audit\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:19.118541 master-0 kubenswrapper[7387]: E0308 03:11:19.118292 7387 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 08 03:11:19.118541 master-0 kubenswrapper[7387]: E0308 03:11:19.118399 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-serving-cert podName:a4382075-d76b-4f2e-9ef1-5bc0bcb5d083 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:20.118382024 +0000 UTC m=+16.512857705 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-serving-cert") pod "apiserver-5f6db9bdd8-5hlgc" (UID: "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083") : secret "serving-cert" not found Mar 08 03:11:19.118628 master-0 kubenswrapper[7387]: E0308 03:11:19.118544 7387 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found Mar 08 03:11:19.118628 master-0 kubenswrapper[7387]: E0308 03:11:19.118583 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-etcd-client podName:a4382075-d76b-4f2e-9ef1-5bc0bcb5d083 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:20.118569009 +0000 UTC m=+16.513044690 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-etcd-client") pod "apiserver-5f6db9bdd8-5hlgc" (UID: "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083") : secret "etcd-client" not found Mar 08 03:11:19.118628 master-0 kubenswrapper[7387]: E0308 03:11:19.118613 7387 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 08 03:11:19.118628 master-0 kubenswrapper[7387]: E0308 03:11:19.118630 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-audit podName:a4382075-d76b-4f2e-9ef1-5bc0bcb5d083 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:20.118624801 +0000 UTC m=+16.513100482 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-audit") pod "apiserver-5f6db9bdd8-5hlgc" (UID: "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083") : configmap "audit-0" not found Mar 08 03:11:20.044900 master-0 kubenswrapper[7387]: I0308 03:11:20.044537 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" Mar 08 03:11:20.131690 master-0 kubenswrapper[7387]: I0308 03:11:20.131646 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-etcd-client\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:20.132597 master-0 kubenswrapper[7387]: I0308 03:11:20.132571 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-serving-cert\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:20.132789 master-0 kubenswrapper[7387]: E0308 03:11:20.131996 7387 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: secret "etcd-client" not found Mar 08 03:11:20.132856 master-0 kubenswrapper[7387]: I0308 03:11:20.132757 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-audit\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" Mar 08 03:11:20.132856 master-0 kubenswrapper[7387]: E0308 03:11:20.132827 7387 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-etcd-client podName:a4382075-d76b-4f2e-9ef1-5bc0bcb5d083 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:22.132807359 +0000 UTC m=+18.527283030 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-etcd-client") pod "apiserver-5f6db9bdd8-5hlgc" (UID: "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083") : secret "etcd-client" not found Mar 08 03:11:20.132856 master-0 kubenswrapper[7387]: E0308 03:11:20.132640 7387 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 08 03:11:20.133047 master-0 kubenswrapper[7387]: E0308 03:11:20.132962 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-serving-cert podName:a4382075-d76b-4f2e-9ef1-5bc0bcb5d083 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:22.132936913 +0000 UTC m=+18.527412604 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-serving-cert") pod "apiserver-5f6db9bdd8-5hlgc" (UID: "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083") : secret "serving-cert" not found Mar 08 03:11:20.133184 master-0 kubenswrapper[7387]: E0308 03:11:20.133165 7387 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 08 03:11:20.133309 master-0 kubenswrapper[7387]: E0308 03:11:20.133293 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-audit podName:a4382075-d76b-4f2e-9ef1-5bc0bcb5d083 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:22.133280092 +0000 UTC m=+18.527755783 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-audit") pod "apiserver-5f6db9bdd8-5hlgc" (UID: "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083") : configmap "audit-0" not found Mar 08 03:11:20.313593 master-0 kubenswrapper[7387]: I0308 03:11:20.313535 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-qjpkx"] Mar 08 03:11:20.314132 master-0 kubenswrapper[7387]: I0308 03:11:20.314100 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.335749 master-0 kubenswrapper[7387]: I0308 03:11:20.335694 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-run\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.335749 master-0 kubenswrapper[7387]: I0308 03:11:20.335741 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-tuned\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.336121 master-0 kubenswrapper[7387]: I0308 03:11:20.335778 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p4tj\" (UniqueName: \"kubernetes.io/projected/5d29f16f-e26f-4b9d-a646-230316e936a8-kube-api-access-7p4tj\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.336121 master-0 kubenswrapper[7387]: I0308 03:11:20.335874 7387 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-host\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.336121 master-0 kubenswrapper[7387]: I0308 03:11:20.335977 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5d29f16f-e26f-4b9d-a646-230316e936a8-tmp\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.336121 master-0 kubenswrapper[7387]: I0308 03:11:20.336007 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-var-lib-kubelet\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.336350 master-0 kubenswrapper[7387]: I0308 03:11:20.336104 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-sysconfig\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.336350 master-0 kubenswrapper[7387]: I0308 03:11:20.336231 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-sysctl-conf\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.336350 master-0 kubenswrapper[7387]: I0308 
03:11:20.336270 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-modprobe-d\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.336350 master-0 kubenswrapper[7387]: I0308 03:11:20.336344 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-sysctl-d\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.336574 master-0 kubenswrapper[7387]: I0308 03:11:20.336486 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-kubernetes\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.336574 master-0 kubenswrapper[7387]: I0308 03:11:20.336526 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-lib-modules\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.336721 master-0 kubenswrapper[7387]: I0308 03:11:20.336680 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-sys\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 
03:11:20.336844 master-0 kubenswrapper[7387]: I0308 03:11:20.336727 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-systemd\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.445948 master-0 kubenswrapper[7387]: I0308 03:11:20.445811 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-kubernetes\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.445948 master-0 kubenswrapper[7387]: I0308 03:11:20.445863 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-lib-modules\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.445948 master-0 kubenswrapper[7387]: I0308 03:11:20.445937 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-sys\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.446227 master-0 kubenswrapper[7387]: I0308 03:11:20.446180 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-kubernetes\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.446320 master-0 
kubenswrapper[7387]: I0308 03:11:20.446279 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-systemd\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.446557 master-0 kubenswrapper[7387]: I0308 03:11:20.446520 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-run\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.446610 master-0 kubenswrapper[7387]: I0308 03:11:20.446568 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-tuned\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.446657 master-0 kubenswrapper[7387]: I0308 03:11:20.446641 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p4tj\" (UniqueName: \"kubernetes.io/projected/5d29f16f-e26f-4b9d-a646-230316e936a8-kube-api-access-7p4tj\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.446697 master-0 kubenswrapper[7387]: I0308 03:11:20.446674 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-host\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.446745 master-0 kubenswrapper[7387]: I0308 03:11:20.446703 7387 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5d29f16f-e26f-4b9d-a646-230316e936a8-tmp\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.446799 master-0 kubenswrapper[7387]: I0308 03:11:20.446754 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-var-lib-kubelet\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.446850 master-0 kubenswrapper[7387]: I0308 03:11:20.446796 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-sysconfig\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.446889 master-0 kubenswrapper[7387]: I0308 03:11:20.446861 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-sysctl-conf\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.446978 master-0 kubenswrapper[7387]: I0308 03:11:20.446894 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-modprobe-d\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.447221 master-0 kubenswrapper[7387]: I0308 03:11:20.447184 7387 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-sysctl-conf\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.447265 master-0 kubenswrapper[7387]: I0308 03:11:20.447179 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-systemd\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.447265 master-0 kubenswrapper[7387]: I0308 03:11:20.447234 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-sysctl-d\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.447350 master-0 kubenswrapper[7387]: I0308 03:11:20.447312 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-run\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.447399 master-0 kubenswrapper[7387]: I0308 03:11:20.447365 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-sys\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.447438 master-0 kubenswrapper[7387]: I0308 03:11:20.447402 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-var-lib-kubelet\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.447666 master-0 kubenswrapper[7387]: I0308 03:11:20.447630 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-sysctl-d\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.447731 master-0 kubenswrapper[7387]: I0308 03:11:20.447695 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-host\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.447777 master-0 kubenswrapper[7387]: I0308 03:11:20.447717 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-sysconfig\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.447822 master-0 kubenswrapper[7387]: I0308 03:11:20.447807 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-lib-modules\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.448116 master-0 kubenswrapper[7387]: I0308 03:11:20.448066 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-modprobe-d\") pod 
\"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.453310 master-0 kubenswrapper[7387]: I0308 03:11:20.453269 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-tuned\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.456326 master-0 kubenswrapper[7387]: I0308 03:11:20.456277 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5d29f16f-e26f-4b9d-a646-230316e936a8-tmp\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.478230 master-0 kubenswrapper[7387]: I0308 03:11:20.478185 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p4tj\" (UniqueName: \"kubernetes.io/projected/5d29f16f-e26f-4b9d-a646-230316e936a8-kube-api-access-7p4tj\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.643448 master-0 kubenswrapper[7387]: I0308 03:11:20.643339 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:11:20.651524 master-0 kubenswrapper[7387]: I0308 03:11:20.649526 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" Mar 08 03:11:20.651524 master-0 kubenswrapper[7387]: I0308 03:11:20.649569 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:11:20.651524 master-0 kubenswrapper[7387]: I0308 03:11:20.649590 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" Mar 08 03:11:20.651524 master-0 kubenswrapper[7387]: I0308 03:11:20.649607 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" Mar 08 03:11:20.651524 master-0 kubenswrapper[7387]: I0308 03:11:20.649639 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls\") pod \"dns-operator-589895fbb7-9mhwc\" (UID: \"ef16d7ae-66aa-45d4-b1a6-1327738a46bb\") " pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc" Mar 08 03:11:20.651524 master-0 kubenswrapper[7387]: I0308 03:11:20.649666 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx" Mar 08 03:11:20.651524 master-0 kubenswrapper[7387]: I0308 03:11:20.649688 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert\") pod \"olm-operator-d64cfc9db-t659n\" (UID: \"d68278f6-59d5-4bbf-b969-e47635ffd4cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n" Mar 08 03:11:20.651524 master-0 kubenswrapper[7387]: I0308 03:11:20.649728 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert\") pod \"catalog-operator-7d9c49f57b-wsswx\" (UID: \"5a92a557-d023-4531-b3a3-e559af0fe358\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx" Mar 08 03:11:20.651524 master-0 kubenswrapper[7387]: I0308 03:11:20.649743 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs\") pod \"multus-admission-controller-8d675b596-xhkzl\" (UID: \"d5f84bd4-2803-41ff-a1d1-a549991fe895\") " pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl" Mar 
08 03:11:20.651524 master-0 kubenswrapper[7387]: I0308 03:11:20.649767 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-8qznw\" (UID: \"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" Mar 08 03:11:20.651524 master-0 kubenswrapper[7387]: I0308 03:11:20.649784 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" Mar 08 03:11:20.651524 master-0 kubenswrapper[7387]: E0308 03:11:20.650424 7387 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 08 03:11:20.651524 master-0 kubenswrapper[7387]: E0308 03:11:20.650474 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert podName:d68278f6-59d5-4bbf-b969-e47635ffd4cc nodeName:}" failed. No retries permitted until 2026-03-08 03:11:36.650459487 +0000 UTC m=+33.044935168 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert") pod "olm-operator-d64cfc9db-t659n" (UID: "d68278f6-59d5-4bbf-b969-e47635ffd4cc") : secret "olm-operator-serving-cert" not found Mar 08 03:11:20.651524 master-0 kubenswrapper[7387]: E0308 03:11:20.650752 7387 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 08 03:11:20.651524 master-0 kubenswrapper[7387]: E0308 03:11:20.650857 7387 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 08 03:11:20.651524 master-0 kubenswrapper[7387]: E0308 03:11:20.650869 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert podName:f8711b9f-3d18-4b8d-a263-2c9af9dc68a6 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:36.650839627 +0000 UTC m=+33.045315348 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-8qznw" (UID: "f8711b9f-3d18-4b8d-a263-2c9af9dc68a6") : secret "package-server-manager-serving-cert" not found Mar 08 03:11:20.651524 master-0 kubenswrapper[7387]: E0308 03:11:20.650933 7387 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 08 03:11:20.651524 master-0 kubenswrapper[7387]: E0308 03:11:20.650969 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert podName:5a92a557-d023-4531-b3a3-e559af0fe358 nodeName:}" failed. 
No retries permitted until 2026-03-08 03:11:36.650885378 +0000 UTC m=+33.045361099 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert") pod "catalog-operator-7d9c49f57b-wsswx" (UID: "5a92a557-d023-4531-b3a3-e559af0fe358") : secret "catalog-operator-serving-cert" not found Mar 08 03:11:20.651524 master-0 kubenswrapper[7387]: E0308 03:11:20.650973 7387 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 08 03:11:20.651524 master-0 kubenswrapper[7387]: E0308 03:11:20.650997 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs podName:f6ee6202-11e5-4586-ae46-075da1ad7f1a nodeName:}" failed. No retries permitted until 2026-03-08 03:11:36.650985461 +0000 UTC m=+33.045461182 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs") pod "network-metrics-daemon-2l64n" (UID: "f6ee6202-11e5-4586-ae46-075da1ad7f1a") : secret "metrics-daemon-secret" not found Mar 08 03:11:20.651524 master-0 kubenswrapper[7387]: E0308 03:11:20.650971 7387 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 08 03:11:20.651524 master-0 kubenswrapper[7387]: E0308 03:11:20.651002 7387 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 08 03:11:20.651524 master-0 kubenswrapper[7387]: E0308 03:11:20.651022 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics podName:7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6 nodeName:}" failed. 
No retries permitted until 2026-03-08 03:11:36.651010982 +0000 UTC m=+33.045486693 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-4pgcf" (UID: "7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6") : secret "marketplace-operator-metrics" not found Mar 08 03:11:20.651524 master-0 kubenswrapper[7387]: E0308 03:11:20.651046 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs podName:d5f84bd4-2803-41ff-a1d1-a549991fe895 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:36.651035662 +0000 UTC m=+33.045511373 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs") pod "multus-admission-controller-8d675b596-xhkzl" (UID: "d5f84bd4-2803-41ff-a1d1-a549991fe895") : secret "multus-admission-controller-secret" not found Mar 08 03:11:20.651524 master-0 kubenswrapper[7387]: E0308 03:11:20.651075 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls podName:ed56c17f-7e15-4776-80a6-3ef091307e89 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:36.651056083 +0000 UTC m=+33.045531764 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-hzlxx" (UID: "ed56c17f-7e15-4776-80a6-3ef091307e89") : secret "cluster-monitoring-operator-tls" not found
Mar 08 03:11:20.653979 master-0 kubenswrapper[7387]: I0308 03:11:20.653947 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert\") pod \"cluster-version-operator-745944c6b7-rs4ld\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld"
Mar 08 03:11:20.653979 master-0 kubenswrapper[7387]: I0308 03:11:20.653954 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq"
Mar 08 03:11:20.654535 master-0 kubenswrapper[7387]: I0308 03:11:20.654507 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"
Mar 08 03:11:20.655143 master-0 kubenswrapper[7387]: I0308 03:11:20.655116 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls\") pod \"dns-operator-589895fbb7-9mhwc\" (UID: \"ef16d7ae-66aa-45d4-b1a6-1327738a46bb\") " pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc"
Mar 08 03:11:20.664449 master-0 kubenswrapper[7387]: W0308 03:11:20.664407 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d29f16f_e26f_4b9d_a646_230316e936a8.slice/crio-553b72df7efe7f1084b51237153402cc8d3076ca094147045a9026c098236c9b WatchSource:0}: Error finding container 553b72df7efe7f1084b51237153402cc8d3076ca094147045a9026c098236c9b: Status 404 returned error can't find the container with id 553b72df7efe7f1084b51237153402cc8d3076ca094147045a9026c098236c9b
Mar 08 03:11:20.756329 master-0 kubenswrapper[7387]: I0308 03:11:20.755801 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:20.756627 master-0 kubenswrapper[7387]: I0308 03:11:20.756419 7387 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 08 03:11:20.776833 master-0 kubenswrapper[7387]: I0308 03:11:20.776756 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:11:20.928642 master-0 kubenswrapper[7387]: I0308 03:11:20.928556 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc"
Mar 08 03:11:20.930362 master-0 kubenswrapper[7387]: I0308 03:11:20.930109 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld"
Mar 08 03:11:20.940530 master-0 kubenswrapper[7387]: I0308 03:11:20.940162 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq"
Mar 08 03:11:20.940530 master-0 kubenswrapper[7387]: I0308 03:11:20.940289 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"
Mar 08 03:11:21.048810 master-0 kubenswrapper[7387]: I0308 03:11:21.048699 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" event={"ID":"103158c5-c99f-4224-bf5a-e23b1aaf9172","Type":"ContainerStarted","Data":"a90adc87011fbb7cd1968febcefc0ce682e90d9df30e52bef5969b7cab457d60"}
Mar 08 03:11:21.052371 master-0 kubenswrapper[7387]: I0308 03:11:21.052333 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" event={"ID":"5d29f16f-e26f-4b9d-a646-230316e936a8","Type":"ContainerStarted","Data":"c7f1b996cd404618937fc5382eb0e4eedfb1a26f7cb240bf95624436c3eb41cc"}
Mar 08 03:11:21.053814 master-0 kubenswrapper[7387]: I0308 03:11:21.052377 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" event={"ID":"5d29f16f-e26f-4b9d-a646-230316e936a8","Type":"ContainerStarted","Data":"553b72df7efe7f1084b51237153402cc8d3076ca094147045a9026c098236c9b"}
Mar 08 03:11:21.058298 master-0 kubenswrapper[7387]: I0308 03:11:21.057552 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-client-ca\") pod \"controller-manager-855f6f6d7d-t5fdb\" (UID: \"861865c2-a446-4bbf-ad71-7900d991f207\") " pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"
Mar 08 03:11:21.058298 master-0 kubenswrapper[7387]: E0308 03:11:21.057739 7387 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 08 03:11:21.058298 master-0 kubenswrapper[7387]: E0308 03:11:21.057796 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-client-ca podName:861865c2-a446-4bbf-ad71-7900d991f207 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:29.05777992 +0000 UTC m=+25.452255611 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-client-ca") pod "controller-manager-855f6f6d7d-t5fdb" (UID: "861865c2-a446-4bbf-ad71-7900d991f207") : configmap "client-ca" not found
Mar 08 03:11:21.058298 master-0 kubenswrapper[7387]: I0308 03:11:21.058159 7387 generic.go:334] "Generic (PLEG): container finished" podID="4711e21f-da6d-47ee-8722-64663e05de10" containerID="768949e4d93a435cb37be6fb573bf2225669a3e078f13a7117be88e9456f605b" exitCode=0
Mar 08 03:11:21.058298 master-0 kubenswrapper[7387]: I0308 03:11:21.058197 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt" event={"ID":"4711e21f-da6d-47ee-8722-64663e05de10","Type":"ContainerDied","Data":"768949e4d93a435cb37be6fb573bf2225669a3e078f13a7117be88e9456f605b"}
Mar 08 03:11:21.058651 master-0 kubenswrapper[7387]: I0308 03:11:21.058574 7387 scope.go:117] "RemoveContainer" containerID="768949e4d93a435cb37be6fb573bf2225669a3e078f13a7117be88e9456f605b"
Mar 08 03:11:21.067738 master-0 kubenswrapper[7387]: I0308 03:11:21.067356 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" event={"ID":"1f7c9726-057b-4c5c-8a03-9bc407dedb9b","Type":"ContainerStarted","Data":"dfcfcec74b59c8edece18562777369d3232bedeeb026d96b158dd486250793d3"}
Mar 08 03:11:21.108983 master-0 kubenswrapper[7387]: I0308 03:11:21.107779 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" podStartSLOduration=1.107749486 podStartE2EDuration="1.107749486s" podCreationTimestamp="2026-03-08 03:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:11:21.106259417 +0000 UTC m=+17.500735098" watchObservedRunningTime="2026-03-08 03:11:21.107749486 +0000 UTC m=+17.502225167"
Mar 08 03:11:21.141405 master-0 kubenswrapper[7387]: I0308 03:11:21.141351 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq"]
Mar 08 03:11:21.171517 master-0 kubenswrapper[7387]: I0308 03:11:21.171465 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-9mhwc"]
Mar 08 03:11:21.176014 master-0 kubenswrapper[7387]: W0308 03:11:21.175972 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd82cf0db_0891_482d_856b_1675843042dd.slice/crio-f5a6cee35f22c780870380f137c7c7ac5cad4e9bf1cc3de7531cd3267c12f312 WatchSource:0}: Error finding container f5a6cee35f22c780870380f137c7c7ac5cad4e9bf1cc3de7531cd3267c12f312: Status 404 returned error can't find the container with id f5a6cee35f22c780870380f137c7c7ac5cad4e9bf1cc3de7531cd3267c12f312
Mar 08 03:11:21.182787 master-0 kubenswrapper[7387]: W0308 03:11:21.182678 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef16d7ae_66aa_45d4_b1a6_1327738a46bb.slice/crio-f061dbce14702bf613c2afa174a972bae2bb5e74063744b88de9bb9b512fc912 WatchSource:0}: Error finding container f061dbce14702bf613c2afa174a972bae2bb5e74063744b88de9bb9b512fc912: Status 404 returned error can't find the container with id f061dbce14702bf613c2afa174a972bae2bb5e74063744b88de9bb9b512fc912
Mar 08 03:11:21.194440 master-0 kubenswrapper[7387]: I0308 03:11:21.194398 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"]
Mar 08 03:11:21.204726 master-0 kubenswrapper[7387]: W0308 03:11:21.204681 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod197afe92_5912_4e90_a477_e3abe001bbc7.slice/crio-323b10005e4debbf49965c6c6b8a7d60537ce630469f2e6648f22893122d5907 WatchSource:0}: Error finding container 323b10005e4debbf49965c6c6b8a7d60537ce630469f2e6648f22893122d5907: Status 404 returned error can't find the container with id 323b10005e4debbf49965c6c6b8a7d60537ce630469f2e6648f22893122d5907
Mar 08 03:11:22.077009 master-0 kubenswrapper[7387]: I0308 03:11:22.076225 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt" event={"ID":"4711e21f-da6d-47ee-8722-64663e05de10","Type":"ContainerStarted","Data":"817f432c51c661f9dc4a70152616d33f0d5d8c245d1f7dbc4c3905c7f6f13361"}
Mar 08 03:11:22.077456 master-0 kubenswrapper[7387]: I0308 03:11:22.077412 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc" event={"ID":"ef16d7ae-66aa-45d4-b1a6-1327738a46bb","Type":"ContainerStarted","Data":"f061dbce14702bf613c2afa174a972bae2bb5e74063744b88de9bb9b512fc912"}
Mar 08 03:11:22.078393 master-0 kubenswrapper[7387]: I0308 03:11:22.078352 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" event={"ID":"197afe92-5912-4e90-a477-e3abe001bbc7","Type":"ContainerStarted","Data":"323b10005e4debbf49965c6c6b8a7d60537ce630469f2e6648f22893122d5907"}
Mar 08 03:11:22.079804 master-0 kubenswrapper[7387]: I0308 03:11:22.079772 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" event={"ID":"d82cf0db-0891-482d-856b-1675843042dd","Type":"ContainerStarted","Data":"f5a6cee35f22c780870380f137c7c7ac5cad4e9bf1cc3de7531cd3267c12f312"}
Mar 08 03:11:22.178645 master-0 kubenswrapper[7387]: I0308 03:11:22.177170 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-etcd-client\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc"
Mar 08 03:11:22.178645 master-0 kubenswrapper[7387]: I0308 03:11:22.177222 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-serving-cert\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc"
Mar 08 03:11:22.178645 master-0 kubenswrapper[7387]: E0308 03:11:22.177327 7387 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 08 03:11:22.178645 master-0 kubenswrapper[7387]: E0308 03:11:22.177371 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-serving-cert podName:a4382075-d76b-4f2e-9ef1-5bc0bcb5d083 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:26.177356315 +0000 UTC m=+22.571831996 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-serving-cert") pod "apiserver-5f6db9bdd8-5hlgc" (UID: "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083") : secret "serving-cert" not found
Mar 08 03:11:22.178645 master-0 kubenswrapper[7387]: I0308 03:11:22.177657 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-audit\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc"
Mar 08 03:11:22.181686 master-0 kubenswrapper[7387]: E0308 03:11:22.181661 7387 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 08 03:11:22.181784 master-0 kubenswrapper[7387]: E0308 03:11:22.181720 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-audit podName:a4382075-d76b-4f2e-9ef1-5bc0bcb5d083 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:26.181705759 +0000 UTC m=+22.576181440 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-audit") pod "apiserver-5f6db9bdd8-5hlgc" (UID: "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083") : configmap "audit-0" not found
Mar 08 03:11:22.191500 master-0 kubenswrapper[7387]: I0308 03:11:22.191437 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-etcd-client\") pod \"apiserver-5f6db9bdd8-5hlgc\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") " pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc"
Mar 08 03:11:22.640926 master-0 kubenswrapper[7387]: I0308 03:11:22.640755 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-5f6db9bdd8-5hlgc"]
Mar 08 03:11:22.641103 master-0 kubenswrapper[7387]: E0308 03:11:22.641008 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc" podUID="a4382075-d76b-4f2e-9ef1-5bc0bcb5d083"
Mar 08 03:11:23.088594 master-0 kubenswrapper[7387]: I0308 03:11:23.088550 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc"
Mar 08 03:11:25.224849 master-0 kubenswrapper[7387]: I0308 03:11:25.224774 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc"
Mar 08 03:11:25.325455 master-0 kubenswrapper[7387]: I0308 03:11:25.325376 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-etcd-client\") pod \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") "
Mar 08 03:11:25.325687 master-0 kubenswrapper[7387]: I0308 03:11:25.325485 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-image-import-ca\") pod \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") "
Mar 08 03:11:25.325687 master-0 kubenswrapper[7387]: I0308 03:11:25.325567 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-encryption-config\") pod \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") "
Mar 08 03:11:25.327442 master-0 kubenswrapper[7387]: I0308 03:11:25.326765 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-config\") pod \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") "
Mar 08 03:11:25.327442 master-0 kubenswrapper[7387]: I0308 03:11:25.326847 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083" (UID: "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 03:11:25.327442 master-0 kubenswrapper[7387]: I0308 03:11:25.326878 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-etcd-serving-ca\") pod \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") "
Mar 08 03:11:25.327442 master-0 kubenswrapper[7387]: I0308 03:11:25.327039 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-node-pullsecrets\") pod \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") "
Mar 08 03:11:25.327442 master-0 kubenswrapper[7387]: I0308 03:11:25.327094 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-audit-dir\") pod \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") "
Mar 08 03:11:25.327442 master-0 kubenswrapper[7387]: I0308 03:11:25.327156 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-trusted-ca-bundle\") pod \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") "
Mar 08 03:11:25.327442 master-0 kubenswrapper[7387]: I0308 03:11:25.327214 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bpgc\" (UniqueName: \"kubernetes.io/projected/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-kube-api-access-9bpgc\") pod \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\" (UID: \"a4382075-d76b-4f2e-9ef1-5bc0bcb5d083\") "
Mar 08 03:11:25.328001 master-0 kubenswrapper[7387]: I0308 03:11:25.327953 7387 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-image-import-ca\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:25.328797 master-0 kubenswrapper[7387]: I0308 03:11:25.327694 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083" (UID: "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 03:11:25.328952 master-0 kubenswrapper[7387]: I0308 03:11:25.327896 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083" (UID: "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:11:25.328952 master-0 kubenswrapper[7387]: I0308 03:11:25.327846 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083" (UID: "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:11:25.328952 master-0 kubenswrapper[7387]: I0308 03:11:25.328277 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-config" (OuterVolumeSpecName: "config") pod "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083" (UID: "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 03:11:25.328952 master-0 kubenswrapper[7387]: I0308 03:11:25.328708 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083" (UID: "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 03:11:25.331975 master-0 kubenswrapper[7387]: I0308 03:11:25.331879 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083" (UID: "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 03:11:25.332763 master-0 kubenswrapper[7387]: I0308 03:11:25.332704 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-kube-api-access-9bpgc" (OuterVolumeSpecName: "kube-api-access-9bpgc") pod "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083" (UID: "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083"). InnerVolumeSpecName "kube-api-access-9bpgc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:11:25.335254 master-0 kubenswrapper[7387]: I0308 03:11:25.335194 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083" (UID: "a4382075-d76b-4f2e-9ef1-5bc0bcb5d083"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 03:11:25.432941 master-0 kubenswrapper[7387]: I0308 03:11:25.430881 7387 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-etcd-client\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:25.432941 master-0 kubenswrapper[7387]: I0308 03:11:25.430950 7387 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-encryption-config\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:25.432941 master-0 kubenswrapper[7387]: I0308 03:11:25.430968 7387 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-config\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:25.432941 master-0 kubenswrapper[7387]: I0308 03:11:25.430985 7387 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-etcd-serving-ca\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:25.432941 master-0 kubenswrapper[7387]: I0308 03:11:25.431002 7387 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-node-pullsecrets\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:25.432941 master-0 kubenswrapper[7387]: I0308 03:11:25.431014 7387 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-audit-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:25.432941 master-0 kubenswrapper[7387]: I0308 03:11:25.431025 7387 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:25.432941 master-0 kubenswrapper[7387]: I0308 03:11:25.431036 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bpgc\" (UniqueName: \"kubernetes.io/projected/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-kube-api-access-9bpgc\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:25.455948 master-0 kubenswrapper[7387]: I0308 03:11:25.455057 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 08 03:11:25.455948 master-0 kubenswrapper[7387]: I0308 03:11:25.455633 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 08 03:11:25.461838 master-0 kubenswrapper[7387]: I0308 03:11:25.461131 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 08 03:11:25.507992 master-0 kubenswrapper[7387]: I0308 03:11:25.504042 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 08 03:11:25.541310 master-0 kubenswrapper[7387]: I0308 03:11:25.540276 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b8c5365-e7a0-4f69-913f-2e12b142e4a5-var-lock\") pod \"installer-1-master-0\" (UID: \"8b8c5365-e7a0-4f69-913f-2e12b142e4a5\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 08 03:11:25.541310 master-0 kubenswrapper[7387]: I0308 03:11:25.540349 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b8c5365-e7a0-4f69-913f-2e12b142e4a5-kube-api-access\") pod \"installer-1-master-0\" (UID: \"8b8c5365-e7a0-4f69-913f-2e12b142e4a5\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 08 03:11:25.541310 master-0 kubenswrapper[7387]: I0308 03:11:25.540470 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b8c5365-e7a0-4f69-913f-2e12b142e4a5-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"8b8c5365-e7a0-4f69-913f-2e12b142e4a5\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 08 03:11:25.642408 master-0 kubenswrapper[7387]: I0308 03:11:25.642363 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b8c5365-e7a0-4f69-913f-2e12b142e4a5-var-lock\") pod \"installer-1-master-0\" (UID: \"8b8c5365-e7a0-4f69-913f-2e12b142e4a5\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 08 03:11:25.642580 master-0 kubenswrapper[7387]: I0308 03:11:25.642556 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b8c5365-e7a0-4f69-913f-2e12b142e4a5-kube-api-access\") pod \"installer-1-master-0\" (UID: \"8b8c5365-e7a0-4f69-913f-2e12b142e4a5\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 08 03:11:25.643051 master-0 kubenswrapper[7387]: I0308 03:11:25.642937 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b8c5365-e7a0-4f69-913f-2e12b142e4a5-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"8b8c5365-e7a0-4f69-913f-2e12b142e4a5\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 08 03:11:25.654632 master-0 kubenswrapper[7387]: I0308 03:11:25.654584 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b8c5365-e7a0-4f69-913f-2e12b142e4a5-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"8b8c5365-e7a0-4f69-913f-2e12b142e4a5\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 08 03:11:25.654699 master-0 kubenswrapper[7387]: I0308 03:11:25.654683 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b8c5365-e7a0-4f69-913f-2e12b142e4a5-var-lock\") pod \"installer-1-master-0\" (UID: \"8b8c5365-e7a0-4f69-913f-2e12b142e4a5\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 08 03:11:25.674015 master-0 kubenswrapper[7387]: I0308 03:11:25.673962 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b8c5365-e7a0-4f69-913f-2e12b142e4a5-kube-api-access\") pod \"installer-1-master-0\" (UID: \"8b8c5365-e7a0-4f69-913f-2e12b142e4a5\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 08 03:11:25.844688 master-0 kubenswrapper[7387]: I0308 03:11:25.844629 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-serving-cert\") pod \"route-controller-manager-55c6bff5f-rc8k9\" (UID: \"ba00bf40-26c1-4eb6-b540-a32cb4ece9a2\") " pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9"
Mar 08 03:11:25.844688 master-0 kubenswrapper[7387]: I0308 03:11:25.844686 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-client-ca\") pod \"route-controller-manager-55c6bff5f-rc8k9\" (UID: \"ba00bf40-26c1-4eb6-b540-a32cb4ece9a2\") " pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9"
Mar 08 03:11:25.844897 master-0 kubenswrapper[7387]: E0308 03:11:25.844769 7387 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 08 03:11:25.844897 master-0 kubenswrapper[7387]: E0308 03:11:25.844817 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-client-ca podName:ba00bf40-26c1-4eb6-b540-a32cb4ece9a2 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:41.844800454 +0000 UTC m=+38.239276135 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-client-ca") pod "route-controller-manager-55c6bff5f-rc8k9" (UID: "ba00bf40-26c1-4eb6-b540-a32cb4ece9a2") : configmap "client-ca" not found
Mar 08 03:11:25.845122 master-0 kubenswrapper[7387]: E0308 03:11:25.845074 7387 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 08 03:11:25.845180 master-0 kubenswrapper[7387]: E0308 03:11:25.845163 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-serving-cert podName:ba00bf40-26c1-4eb6-b540-a32cb4ece9a2 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:41.845144293 +0000 UTC m=+38.239619974 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-serving-cert") pod "route-controller-manager-55c6bff5f-rc8k9" (UID: "ba00bf40-26c1-4eb6-b540-a32cb4ece9a2") : secret "serving-cert" not found
Mar 08 03:11:25.879923 master-0 kubenswrapper[7387]: I0308 03:11:25.874265 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 08 03:11:26.109487 master-0 kubenswrapper[7387]: I0308 03:11:26.109391 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-5f6db9bdd8-5hlgc"
Mar 08 03:11:26.145856 master-0 kubenswrapper[7387]: I0308 03:11:26.145803 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-5bf974f84f-hzx44"]
Mar 08 03:11:26.146771 master-0 kubenswrapper[7387]: I0308 03:11:26.146731 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-5bf974f84f-hzx44"
Mar 08 03:11:26.152112 master-0 kubenswrapper[7387]: I0308 03:11:26.152075 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 08 03:11:26.152341 master-0 kubenswrapper[7387]: I0308 03:11:26.152303 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 08 03:11:26.152450 master-0 kubenswrapper[7387]: I0308 03:11:26.152429 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 08 03:11:26.152558 master-0 kubenswrapper[7387]: I0308 03:11:26.152522 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 08 03:11:26.152656 master-0 kubenswrapper[7387]: I0308 03:11:26.152635 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 08 03:11:26.153537 master-0 kubenswrapper[7387]: I0308 03:11:26.153514 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 08 03:11:26.161175 master-0 kubenswrapper[7387]: I0308 03:11:26.161146 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 08 03:11:26.162448 master-0 kubenswrapper[7387]: I0308 03:11:26.162408 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 08 03:11:26.166120 master-0 kubenswrapper[7387]: I0308 03:11:26.163105 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 08 03:11:26.169403 master-0 kubenswrapper[7387]: I0308 03:11:26.169365 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 08 03:11:26.170169 master-0 kubenswrapper[7387]: I0308 03:11:26.170146 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-5f6db9bdd8-5hlgc"]
Mar 08 03:11:26.170242 master-0 kubenswrapper[7387]: I0308 03:11:26.170183 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-5bf974f84f-hzx44"]
Mar 08 03:11:26.172988 master-0 kubenswrapper[7387]: I0308 03:11:26.172943 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-5f6db9bdd8-5hlgc"]
Mar 08 03:11:26.261817 master-0 kubenswrapper[7387]: I0308 03:11:26.261760 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f2057f75-159d-4416-a234-050f0fe1afc9-audit-dir\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44"
Mar 08 03:11:26.262384 master-0 kubenswrapper[7387]: I0308 03:11:26.261827 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-image-import-ca\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44"
Mar 08 03:11:26.262384 master-0 kubenswrapper[7387]: I0308 03:11:26.261868 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-trusted-ca-bundle\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44"
Mar 08 03:11:26.262384 master-0 kubenswrapper[7387]: I0308 03:11:26.261974 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-encryption-config\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44"
Mar 08 03:11:26.262384 master-0 kubenswrapper[7387]: I0308 03:11:26.262059 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-config\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44"
Mar 08 03:11:26.262384 master-0 kubenswrapper[7387]: I0308 03:11:26.262169 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-etcd-serving-ca\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44"
Mar 08 03:11:26.262384 master-0 kubenswrapper[7387]: I0308 03:11:26.262210 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-audit\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44"
Mar 08 03:11:26.262384 master-0 kubenswrapper[7387]: I0308 03:11:26.262315 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-serving-cert\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44"
Mar 08 03:11:26.262384 master-0 kubenswrapper[7387]: I0308 03:11:26.262371 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9vkx\" (UniqueName: \"kubernetes.io/projected/f2057f75-159d-4416-a234-050f0fe1afc9-kube-api-access-c9vkx\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44"
Mar 08 03:11:26.262384 master-0 kubenswrapper[7387]: I0308 03:11:26.262395 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-etcd-client\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44"
Mar 08 03:11:26.262811 master-0 kubenswrapper[7387]: I0308 03:11:26.262438 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f2057f75-159d-4416-a234-050f0fe1afc9-node-pullsecrets\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44"
Mar 08 03:11:26.262811 master-0 kubenswrapper[7387]: I0308 03:11:26.262592 7387 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-audit\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:26.262811 master-0 kubenswrapper[7387]: I0308 03:11:26.262611 7387 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName:
\"kubernetes.io/secret/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 03:11:26.365960 master-0 kubenswrapper[7387]: I0308 03:11:26.365759 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9vkx\" (UniqueName: \"kubernetes.io/projected/f2057f75-159d-4416-a234-050f0fe1afc9-kube-api-access-c9vkx\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:11:26.366221 master-0 kubenswrapper[7387]: I0308 03:11:26.366016 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-etcd-client\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:11:26.366221 master-0 kubenswrapper[7387]: I0308 03:11:26.366069 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f2057f75-159d-4416-a234-050f0fe1afc9-node-pullsecrets\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:11:26.366221 master-0 kubenswrapper[7387]: I0308 03:11:26.366139 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f2057f75-159d-4416-a234-050f0fe1afc9-audit-dir\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:11:26.366221 master-0 kubenswrapper[7387]: I0308 03:11:26.366208 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-image-import-ca\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:11:26.366459 master-0 kubenswrapper[7387]: I0308 03:11:26.366246 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-trusted-ca-bundle\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:11:26.366505 master-0 kubenswrapper[7387]: I0308 03:11:26.366475 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-encryption-config\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:11:26.366547 master-0 kubenswrapper[7387]: I0308 03:11:26.366514 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-config\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:11:26.367698 master-0 kubenswrapper[7387]: I0308 03:11:26.367215 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f2057f75-159d-4416-a234-050f0fe1afc9-node-pullsecrets\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:11:26.367698 master-0 kubenswrapper[7387]: I0308 03:11:26.367297 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/f2057f75-159d-4416-a234-050f0fe1afc9-audit-dir\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:11:26.367698 master-0 kubenswrapper[7387]: I0308 03:11:26.367543 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-etcd-serving-ca\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:11:26.367698 master-0 kubenswrapper[7387]: I0308 03:11:26.367577 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-image-import-ca\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:11:26.367698 master-0 kubenswrapper[7387]: I0308 03:11:26.367623 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-audit\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:11:26.367698 master-0 kubenswrapper[7387]: I0308 03:11:26.367651 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-serving-cert\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:11:26.368408 master-0 kubenswrapper[7387]: E0308 03:11:26.368122 7387 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" 
not found Mar 08 03:11:26.368408 master-0 kubenswrapper[7387]: E0308 03:11:26.368288 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-serving-cert podName:f2057f75-159d-4416-a234-050f0fe1afc9 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:26.868161693 +0000 UTC m=+23.262637374 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-serving-cert") pod "apiserver-5bf974f84f-hzx44" (UID: "f2057f75-159d-4416-a234-050f0fe1afc9") : secret "serving-cert" not found Mar 08 03:11:26.368723 master-0 kubenswrapper[7387]: I0308 03:11:26.368672 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-config\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:11:26.370325 master-0 kubenswrapper[7387]: I0308 03:11:26.370299 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-etcd-serving-ca\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:11:26.370491 master-0 kubenswrapper[7387]: I0308 03:11:26.370471 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-trusted-ca-bundle\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:11:26.371203 master-0 kubenswrapper[7387]: I0308 03:11:26.371151 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-audit\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:11:26.374250 master-0 kubenswrapper[7387]: I0308 03:11:26.374221 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-encryption-config\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:11:26.384150 master-0 kubenswrapper[7387]: I0308 03:11:26.384119 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-etcd-client\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:11:26.388436 master-0 kubenswrapper[7387]: I0308 03:11:26.388383 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9vkx\" (UniqueName: \"kubernetes.io/projected/f2057f75-159d-4416-a234-050f0fe1afc9-kube-api-access-c9vkx\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:11:26.877561 master-0 kubenswrapper[7387]: I0308 03:11:26.876944 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-serving-cert\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:11:26.877561 master-0 kubenswrapper[7387]: E0308 03:11:26.877117 7387 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret 
"serving-cert" not found Mar 08 03:11:26.877561 master-0 kubenswrapper[7387]: E0308 03:11:26.877358 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-serving-cert podName:f2057f75-159d-4416-a234-050f0fe1afc9 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:27.877324306 +0000 UTC m=+24.271800027 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-serving-cert") pod "apiserver-5bf974f84f-hzx44" (UID: "f2057f75-159d-4416-a234-050f0fe1afc9") : secret "serving-cert" not found Mar 08 03:11:27.609146 master-0 kubenswrapper[7387]: I0308 03:11:27.608951 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 08 03:11:27.609740 master-0 kubenswrapper[7387]: I0308 03:11:27.609463 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 08 03:11:27.611469 master-0 kubenswrapper[7387]: I0308 03:11:27.611398 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 08 03:11:27.621010 master-0 kubenswrapper[7387]: I0308 03:11:27.619932 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 08 03:11:27.692152 master-0 kubenswrapper[7387]: I0308 03:11:27.692100 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed2e0194-6b50-4478-aba4-21193d2c18aa-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"ed2e0194-6b50-4478-aba4-21193d2c18aa\") " pod="openshift-etcd/installer-1-master-0" Mar 08 03:11:27.692348 master-0 kubenswrapper[7387]: I0308 03:11:27.692189 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/ed2e0194-6b50-4478-aba4-21193d2c18aa-var-lock\") pod \"installer-1-master-0\" (UID: \"ed2e0194-6b50-4478-aba4-21193d2c18aa\") " pod="openshift-etcd/installer-1-master-0" Mar 08 03:11:27.692348 master-0 kubenswrapper[7387]: I0308 03:11:27.692275 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed2e0194-6b50-4478-aba4-21193d2c18aa-kube-api-access\") pod \"installer-1-master-0\" (UID: \"ed2e0194-6b50-4478-aba4-21193d2c18aa\") " pod="openshift-etcd/installer-1-master-0" Mar 08 03:11:27.765079 master-0 kubenswrapper[7387]: I0308 03:11:27.764120 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4382075-d76b-4f2e-9ef1-5bc0bcb5d083" path="/var/lib/kubelet/pods/a4382075-d76b-4f2e-9ef1-5bc0bcb5d083/volumes" Mar 08 03:11:27.793075 master-0 kubenswrapper[7387]: I0308 03:11:27.793016 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ed2e0194-6b50-4478-aba4-21193d2c18aa-var-lock\") pod \"installer-1-master-0\" (UID: \"ed2e0194-6b50-4478-aba4-21193d2c18aa\") " pod="openshift-etcd/installer-1-master-0" Mar 08 03:11:27.793312 master-0 kubenswrapper[7387]: I0308 03:11:27.793141 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ed2e0194-6b50-4478-aba4-21193d2c18aa-var-lock\") pod \"installer-1-master-0\" (UID: \"ed2e0194-6b50-4478-aba4-21193d2c18aa\") " pod="openshift-etcd/installer-1-master-0" Mar 08 03:11:27.793398 master-0 kubenswrapper[7387]: I0308 03:11:27.793369 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed2e0194-6b50-4478-aba4-21193d2c18aa-kube-api-access\") pod \"installer-1-master-0\" (UID: \"ed2e0194-6b50-4478-aba4-21193d2c18aa\") " 
pod="openshift-etcd/installer-1-master-0" Mar 08 03:11:27.793568 master-0 kubenswrapper[7387]: I0308 03:11:27.793541 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed2e0194-6b50-4478-aba4-21193d2c18aa-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"ed2e0194-6b50-4478-aba4-21193d2c18aa\") " pod="openshift-etcd/installer-1-master-0" Mar 08 03:11:27.793665 master-0 kubenswrapper[7387]: I0308 03:11:27.793649 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed2e0194-6b50-4478-aba4-21193d2c18aa-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"ed2e0194-6b50-4478-aba4-21193d2c18aa\") " pod="openshift-etcd/installer-1-master-0" Mar 08 03:11:27.818536 master-0 kubenswrapper[7387]: I0308 03:11:27.818491 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed2e0194-6b50-4478-aba4-21193d2c18aa-kube-api-access\") pod \"installer-1-master-0\" (UID: \"ed2e0194-6b50-4478-aba4-21193d2c18aa\") " pod="openshift-etcd/installer-1-master-0" Mar 08 03:11:27.894307 master-0 kubenswrapper[7387]: I0308 03:11:27.894175 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-serving-cert\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:11:27.897086 master-0 kubenswrapper[7387]: I0308 03:11:27.897040 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-serving-cert\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 
03:11:27.948347 master-0 kubenswrapper[7387]: I0308 03:11:27.948289 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 08 03:11:27.962756 master-0 kubenswrapper[7387]: I0308 03:11:27.962716 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:11:28.329031 master-0 kubenswrapper[7387]: I0308 03:11:28.328976 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-5bf974f84f-hzx44"] Mar 08 03:11:28.531928 master-0 kubenswrapper[7387]: I0308 03:11:28.531821 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 08 03:11:28.544180 master-0 kubenswrapper[7387]: W0308 03:11:28.543589 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poded2e0194_6b50_4478_aba4_21193d2c18aa.slice/crio-5228b99475d9080f8618d95d08696502b61174da99371fbe9bbbd7e3bda94150 WatchSource:0}: Error finding container 5228b99475d9080f8618d95d08696502b61174da99371fbe9bbbd7e3bda94150: Status 404 returned error can't find the container with id 5228b99475d9080f8618d95d08696502b61174da99371fbe9bbbd7e3bda94150 Mar 08 03:11:28.557817 master-0 kubenswrapper[7387]: I0308 03:11:28.557769 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 08 03:11:28.657132 master-0 kubenswrapper[7387]: I0308 03:11:28.652884 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"] Mar 08 03:11:28.657132 master-0 kubenswrapper[7387]: E0308 03:11:28.653270 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb" 
podUID="861865c2-a446-4bbf-ad71-7900d991f207" Mar 08 03:11:28.757377 master-0 kubenswrapper[7387]: I0308 03:11:28.754765 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9"] Mar 08 03:11:28.757377 master-0 kubenswrapper[7387]: E0308 03:11:28.756830 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9" podUID="ba00bf40-26c1-4eb6-b540-a32cb4ece9a2" Mar 08 03:11:29.068921 master-0 kubenswrapper[7387]: I0308 03:11:29.067669 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-p6kjc"] Mar 08 03:11:29.068921 master-0 kubenswrapper[7387]: I0308 03:11:29.068521 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-p6kjc" Mar 08 03:11:29.072012 master-0 kubenswrapper[7387]: I0308 03:11:29.071814 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 08 03:11:29.072012 master-0 kubenswrapper[7387]: I0308 03:11:29.071971 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 08 03:11:29.072200 master-0 kubenswrapper[7387]: I0308 03:11:29.072054 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 08 03:11:29.074116 master-0 kubenswrapper[7387]: I0308 03:11:29.074089 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 08 03:11:29.078406 master-0 kubenswrapper[7387]: I0308 03:11:29.078230 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-p6kjc"] Mar 08 03:11:29.113357 master-0 kubenswrapper[7387]: I0308 03:11:29.113301 7387 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8l6s\" (UniqueName: \"kubernetes.io/projected/9b090750-b893-42fe-8def-dfb3f4253d43-kube-api-access-p8l6s\") pod \"dns-default-p6kjc\" (UID: \"9b090750-b893-42fe-8def-dfb3f4253d43\") " pod="openshift-dns/dns-default-p6kjc" Mar 08 03:11:29.113537 master-0 kubenswrapper[7387]: I0308 03:11:29.113415 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b090750-b893-42fe-8def-dfb3f4253d43-config-volume\") pod \"dns-default-p6kjc\" (UID: \"9b090750-b893-42fe-8def-dfb3f4253d43\") " pod="openshift-dns/dns-default-p6kjc" Mar 08 03:11:29.113537 master-0 kubenswrapper[7387]: I0308 03:11:29.113439 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-client-ca\") pod \"controller-manager-855f6f6d7d-t5fdb\" (UID: \"861865c2-a446-4bbf-ad71-7900d991f207\") " pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb" Mar 08 03:11:29.113537 master-0 kubenswrapper[7387]: I0308 03:11:29.113502 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9b090750-b893-42fe-8def-dfb3f4253d43-metrics-tls\") pod \"dns-default-p6kjc\" (UID: \"9b090750-b893-42fe-8def-dfb3f4253d43\") " pod="openshift-dns/dns-default-p6kjc" Mar 08 03:11:29.113675 master-0 kubenswrapper[7387]: E0308 03:11:29.113648 7387 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 08 03:11:29.113713 master-0 kubenswrapper[7387]: E0308 03:11:29.113693 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-client-ca podName:861865c2-a446-4bbf-ad71-7900d991f207 nodeName:}" failed. 
No retries permitted until 2026-03-08 03:11:45.113678851 +0000 UTC m=+41.508154532 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-client-ca") pod "controller-manager-855f6f6d7d-t5fdb" (UID: "861865c2-a446-4bbf-ad71-7900d991f207") : configmap "client-ca" not found Mar 08 03:11:29.146970 master-0 kubenswrapper[7387]: I0308 03:11:29.144278 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" event={"ID":"f2057f75-159d-4416-a234-050f0fe1afc9","Type":"ContainerStarted","Data":"7edd93db0d8a06f729ecca24b4b7c8fc7864a838f800dec0e7d8fc63c8370d81"} Mar 08 03:11:29.146970 master-0 kubenswrapper[7387]: I0308 03:11:29.146273 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc" event={"ID":"ef16d7ae-66aa-45d4-b1a6-1327738a46bb","Type":"ContainerStarted","Data":"bf7e48182a2358cbf539e101a98beda3a04464d0addf881fb80ee90253ea269e"} Mar 08 03:11:29.146970 master-0 kubenswrapper[7387]: I0308 03:11:29.146299 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc" event={"ID":"ef16d7ae-66aa-45d4-b1a6-1327738a46bb","Type":"ContainerStarted","Data":"2c6c3d4cca51d25b40215b41daca093a3dcf0bdf36ebf40cc1e01e88c360dbc5"} Mar 08 03:11:29.148964 master-0 kubenswrapper[7387]: I0308 03:11:29.148761 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"ed2e0194-6b50-4478-aba4-21193d2c18aa","Type":"ContainerStarted","Data":"d2e9db5795871d92c7d2a7895a4e9d84c621a83e058c0b33df388b4e6b8eebdb"} Mar 08 03:11:29.148964 master-0 kubenswrapper[7387]: I0308 03:11:29.148800 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" 
event={"ID":"ed2e0194-6b50-4478-aba4-21193d2c18aa","Type":"ContainerStarted","Data":"5228b99475d9080f8618d95d08696502b61174da99371fbe9bbbd7e3bda94150"} Mar 08 03:11:29.153614 master-0 kubenswrapper[7387]: I0308 03:11:29.153541 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" event={"ID":"1f7c9726-057b-4c5c-8a03-9bc407dedb9b","Type":"ContainerStarted","Data":"1c9245f15bcf54e25cb203fc917b2d4f93cb10e986b15b06f5c877aef0dd40b8"} Mar 08 03:11:29.167074 master-0 kubenswrapper[7387]: I0308 03:11:29.166977 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" event={"ID":"197afe92-5912-4e90-a477-e3abe001bbc7","Type":"ContainerStarted","Data":"fb1999a92de1731bb7581c9cab11a88227624768ed8711ac87f7791288dffadd"} Mar 08 03:11:29.167074 master-0 kubenswrapper[7387]: I0308 03:11:29.167022 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" event={"ID":"197afe92-5912-4e90-a477-e3abe001bbc7","Type":"ContainerStarted","Data":"11de5739554b7c94cfe0fa61f3b1195f2e9f62f484bc837ca53fa9727626c6dd"} Mar 08 03:11:29.174649 master-0 kubenswrapper[7387]: I0308 03:11:29.174611 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" event={"ID":"d82cf0db-0891-482d-856b-1675843042dd","Type":"ContainerStarted","Data":"500c7b149f4f2f095cf355a9cad0c5ca80a3d389709c1ca8a3ccda38df4eb432"} Mar 08 03:11:29.176365 master-0 kubenswrapper[7387]: I0308 03:11:29.176036 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb" Mar 08 03:11:29.176498 master-0 kubenswrapper[7387]: I0308 03:11:29.176449 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"8b8c5365-e7a0-4f69-913f-2e12b142e4a5","Type":"ContainerStarted","Data":"2c219d2ffed7988b04169d2e3c20b8b683dd3d20eb4e97983e2ec6007ff4233d"} Mar 08 03:11:29.176545 master-0 kubenswrapper[7387]: I0308 03:11:29.176503 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"8b8c5365-e7a0-4f69-913f-2e12b142e4a5","Type":"ContainerStarted","Data":"66dc9b6e365401bbecd33295a9a91f35bfb68325d8da1da36b865bca1ae7caa4"} Mar 08 03:11:29.176717 master-0 kubenswrapper[7387]: I0308 03:11:29.176592 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9" Mar 08 03:11:29.199394 master-0 kubenswrapper[7387]: I0308 03:11:29.199364 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb" Mar 08 03:11:29.199769 master-0 kubenswrapper[7387]: I0308 03:11:29.199741 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9" Mar 08 03:11:29.214736 master-0 kubenswrapper[7387]: I0308 03:11:29.214688 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8l6s\" (UniqueName: \"kubernetes.io/projected/9b090750-b893-42fe-8def-dfb3f4253d43-kube-api-access-p8l6s\") pod \"dns-default-p6kjc\" (UID: \"9b090750-b893-42fe-8def-dfb3f4253d43\") " pod="openshift-dns/dns-default-p6kjc" Mar 08 03:11:29.214974 master-0 kubenswrapper[7387]: I0308 03:11:29.214787 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b090750-b893-42fe-8def-dfb3f4253d43-config-volume\") pod \"dns-default-p6kjc\" (UID: \"9b090750-b893-42fe-8def-dfb3f4253d43\") " pod="openshift-dns/dns-default-p6kjc" Mar 08 03:11:29.214974 master-0 kubenswrapper[7387]: I0308 03:11:29.214870 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9b090750-b893-42fe-8def-dfb3f4253d43-metrics-tls\") pod \"dns-default-p6kjc\" (UID: \"9b090750-b893-42fe-8def-dfb3f4253d43\") " pod="openshift-dns/dns-default-p6kjc" Mar 08 03:11:29.215895 master-0 kubenswrapper[7387]: I0308 03:11:29.215865 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b090750-b893-42fe-8def-dfb3f4253d43-config-volume\") pod \"dns-default-p6kjc\" (UID: \"9b090750-b893-42fe-8def-dfb3f4253d43\") " pod="openshift-dns/dns-default-p6kjc" Mar 08 03:11:29.216754 master-0 kubenswrapper[7387]: E0308 03:11:29.216725 7387 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Mar 08 03:11:29.216831 master-0 kubenswrapper[7387]: E0308 03:11:29.216765 7387 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/9b090750-b893-42fe-8def-dfb3f4253d43-metrics-tls podName:9b090750-b893-42fe-8def-dfb3f4253d43 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:29.716753345 +0000 UTC m=+26.111229026 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9b090750-b893-42fe-8def-dfb3f4253d43-metrics-tls") pod "dns-default-p6kjc" (UID: "9b090750-b893-42fe-8def-dfb3f4253d43") : secret "dns-default-metrics-tls" not found Mar 08 03:11:29.253578 master-0 kubenswrapper[7387]: I0308 03:11:29.252403 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=2.252386613 podStartE2EDuration="2.252386613s" podCreationTimestamp="2026-03-08 03:11:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:11:29.245759808 +0000 UTC m=+25.640235489" watchObservedRunningTime="2026-03-08 03:11:29.252386613 +0000 UTC m=+25.646862294" Mar 08 03:11:29.256046 master-0 kubenswrapper[7387]: I0308 03:11:29.255008 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8l6s\" (UniqueName: \"kubernetes.io/projected/9b090750-b893-42fe-8def-dfb3f4253d43-kube-api-access-p8l6s\") pod \"dns-default-p6kjc\" (UID: \"9b090750-b893-42fe-8def-dfb3f4253d43\") " pod="openshift-dns/dns-default-p6kjc" Mar 08 03:11:29.295382 master-0 kubenswrapper[7387]: I0308 03:11:29.295238 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=4.295219 podStartE2EDuration="4.295219s" podCreationTimestamp="2026-03-08 03:11:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:11:29.271448855 +0000 UTC m=+25.665924536" 
watchObservedRunningTime="2026-03-08 03:11:29.295219 +0000 UTC m=+25.689694671" Mar 08 03:11:29.315978 master-0 kubenswrapper[7387]: I0308 03:11:29.315931 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-config\") pod \"ba00bf40-26c1-4eb6-b540-a32cb4ece9a2\" (UID: \"ba00bf40-26c1-4eb6-b540-a32cb4ece9a2\") " Mar 08 03:11:29.316127 master-0 kubenswrapper[7387]: I0308 03:11:29.315986 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbr6g\" (UniqueName: \"kubernetes.io/projected/861865c2-a446-4bbf-ad71-7900d991f207-kube-api-access-cbr6g\") pod \"861865c2-a446-4bbf-ad71-7900d991f207\" (UID: \"861865c2-a446-4bbf-ad71-7900d991f207\") " Mar 08 03:11:29.316127 master-0 kubenswrapper[7387]: I0308 03:11:29.316025 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-config\") pod \"861865c2-a446-4bbf-ad71-7900d991f207\" (UID: \"861865c2-a446-4bbf-ad71-7900d991f207\") " Mar 08 03:11:29.316127 master-0 kubenswrapper[7387]: I0308 03:11:29.316047 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-proxy-ca-bundles\") pod \"861865c2-a446-4bbf-ad71-7900d991f207\" (UID: \"861865c2-a446-4bbf-ad71-7900d991f207\") " Mar 08 03:11:29.316127 master-0 kubenswrapper[7387]: I0308 03:11:29.316070 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/861865c2-a446-4bbf-ad71-7900d991f207-serving-cert\") pod \"861865c2-a446-4bbf-ad71-7900d991f207\" (UID: \"861865c2-a446-4bbf-ad71-7900d991f207\") " Mar 08 03:11:29.316127 master-0 kubenswrapper[7387]: I0308 03:11:29.316089 7387 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-65dtl\" (UniqueName: \"kubernetes.io/projected/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-kube-api-access-65dtl\") pod \"ba00bf40-26c1-4eb6-b540-a32cb4ece9a2\" (UID: \"ba00bf40-26c1-4eb6-b540-a32cb4ece9a2\") " Mar 08 03:11:29.465764 master-0 kubenswrapper[7387]: I0308 03:11:29.317682 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-config" (OuterVolumeSpecName: "config") pod "ba00bf40-26c1-4eb6-b540-a32cb4ece9a2" (UID: "ba00bf40-26c1-4eb6-b540-a32cb4ece9a2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:11:29.465764 master-0 kubenswrapper[7387]: I0308 03:11:29.319507 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-config" (OuterVolumeSpecName: "config") pod "861865c2-a446-4bbf-ad71-7900d991f207" (UID: "861865c2-a446-4bbf-ad71-7900d991f207"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:11:29.466428 master-0 kubenswrapper[7387]: I0308 03:11:29.466376 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "861865c2-a446-4bbf-ad71-7900d991f207" (UID: "861865c2-a446-4bbf-ad71-7900d991f207"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:11:29.467230 master-0 kubenswrapper[7387]: I0308 03:11:29.467184 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-kube-api-access-65dtl" (OuterVolumeSpecName: "kube-api-access-65dtl") pod "ba00bf40-26c1-4eb6-b540-a32cb4ece9a2" (UID: "ba00bf40-26c1-4eb6-b540-a32cb4ece9a2"). 
InnerVolumeSpecName "kube-api-access-65dtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:11:29.471502 master-0 kubenswrapper[7387]: I0308 03:11:29.468591 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/861865c2-a446-4bbf-ad71-7900d991f207-kube-api-access-cbr6g" (OuterVolumeSpecName: "kube-api-access-cbr6g") pod "861865c2-a446-4bbf-ad71-7900d991f207" (UID: "861865c2-a446-4bbf-ad71-7900d991f207"). InnerVolumeSpecName "kube-api-access-cbr6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:11:29.471502 master-0 kubenswrapper[7387]: I0308 03:11:29.468604 7387 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 08 03:11:29.471502 master-0 kubenswrapper[7387]: I0308 03:11:29.468655 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65dtl\" (UniqueName: \"kubernetes.io/projected/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-kube-api-access-65dtl\") on node \"master-0\" DevicePath \"\"" Mar 08 03:11:29.471502 master-0 kubenswrapper[7387]: I0308 03:11:29.468668 7387 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-config\") on node \"master-0\" DevicePath \"\"" Mar 08 03:11:29.471502 master-0 kubenswrapper[7387]: I0308 03:11:29.468678 7387 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-config\") on node \"master-0\" DevicePath \"\"" Mar 08 03:11:29.474017 master-0 kubenswrapper[7387]: I0308 03:11:29.472016 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/861865c2-a446-4bbf-ad71-7900d991f207-serving-cert" (OuterVolumeSpecName: "serving-cert") pod 
"861865c2-a446-4bbf-ad71-7900d991f207" (UID: "861865c2-a446-4bbf-ad71-7900d991f207"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:11:29.572387 master-0 kubenswrapper[7387]: I0308 03:11:29.570856 7387 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/861865c2-a446-4bbf-ad71-7900d991f207-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 03:11:29.572387 master-0 kubenswrapper[7387]: I0308 03:11:29.570892 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbr6g\" (UniqueName: \"kubernetes.io/projected/861865c2-a446-4bbf-ad71-7900d991f207-kube-api-access-cbr6g\") on node \"master-0\" DevicePath \"\"" Mar 08 03:11:29.620434 master-0 kubenswrapper[7387]: I0308 03:11:29.620378 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-mps4n"] Mar 08 03:11:29.621110 master-0 kubenswrapper[7387]: I0308 03:11:29.621093 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-mps4n" Mar 08 03:11:29.673131 master-0 kubenswrapper[7387]: I0308 03:11:29.673093 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f520fbf8-9403-46bc-9381-226a3a1ed1c7-hosts-file\") pod \"node-resolver-mps4n\" (UID: \"f520fbf8-9403-46bc-9381-226a3a1ed1c7\") " pod="openshift-dns/node-resolver-mps4n" Mar 08 03:11:29.673652 master-0 kubenswrapper[7387]: I0308 03:11:29.673633 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrq96\" (UniqueName: \"kubernetes.io/projected/f520fbf8-9403-46bc-9381-226a3a1ed1c7-kube-api-access-hrq96\") pod \"node-resolver-mps4n\" (UID: \"f520fbf8-9403-46bc-9381-226a3a1ed1c7\") " pod="openshift-dns/node-resolver-mps4n" Mar 08 03:11:29.774501 master-0 kubenswrapper[7387]: I0308 03:11:29.774447 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9b090750-b893-42fe-8def-dfb3f4253d43-metrics-tls\") pod \"dns-default-p6kjc\" (UID: \"9b090750-b893-42fe-8def-dfb3f4253d43\") " pod="openshift-dns/dns-default-p6kjc" Mar 08 03:11:29.774694 master-0 kubenswrapper[7387]: I0308 03:11:29.774655 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f520fbf8-9403-46bc-9381-226a3a1ed1c7-hosts-file\") pod \"node-resolver-mps4n\" (UID: \"f520fbf8-9403-46bc-9381-226a3a1ed1c7\") " pod="openshift-dns/node-resolver-mps4n" Mar 08 03:11:29.774989 master-0 kubenswrapper[7387]: I0308 03:11:29.774876 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f520fbf8-9403-46bc-9381-226a3a1ed1c7-hosts-file\") pod \"node-resolver-mps4n\" (UID: \"f520fbf8-9403-46bc-9381-226a3a1ed1c7\") " 
pod="openshift-dns/node-resolver-mps4n" Mar 08 03:11:29.774989 master-0 kubenswrapper[7387]: I0308 03:11:29.774941 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrq96\" (UniqueName: \"kubernetes.io/projected/f520fbf8-9403-46bc-9381-226a3a1ed1c7-kube-api-access-hrq96\") pod \"node-resolver-mps4n\" (UID: \"f520fbf8-9403-46bc-9381-226a3a1ed1c7\") " pod="openshift-dns/node-resolver-mps4n" Mar 08 03:11:29.778047 master-0 kubenswrapper[7387]: I0308 03:11:29.777812 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9b090750-b893-42fe-8def-dfb3f4253d43-metrics-tls\") pod \"dns-default-p6kjc\" (UID: \"9b090750-b893-42fe-8def-dfb3f4253d43\") " pod="openshift-dns/dns-default-p6kjc" Mar 08 03:11:29.793698 master-0 kubenswrapper[7387]: I0308 03:11:29.793643 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrq96\" (UniqueName: \"kubernetes.io/projected/f520fbf8-9403-46bc-9381-226a3a1ed1c7-kube-api-access-hrq96\") pod \"node-resolver-mps4n\" (UID: \"f520fbf8-9403-46bc-9381-226a3a1ed1c7\") " pod="openshift-dns/node-resolver-mps4n" Mar 08 03:11:29.943037 master-0 kubenswrapper[7387]: I0308 03:11:29.942977 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-mps4n" Mar 08 03:11:29.959805 master-0 kubenswrapper[7387]: W0308 03:11:29.959747 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf520fbf8_9403_46bc_9381_226a3a1ed1c7.slice/crio-888efea2277e570177f0a32dc3869b5a0e7a8f448a8a3f5fd3fa3dbd19d67ef3 WatchSource:0}: Error finding container 888efea2277e570177f0a32dc3869b5a0e7a8f448a8a3f5fd3fa3dbd19d67ef3: Status 404 returned error can't find the container with id 888efea2277e570177f0a32dc3869b5a0e7a8f448a8a3f5fd3fa3dbd19d67ef3 Mar 08 03:11:30.041250 master-0 kubenswrapper[7387]: I0308 03:11:30.041139 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-p6kjc" Mar 08 03:11:30.180335 master-0 kubenswrapper[7387]: I0308 03:11:30.180287 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mps4n" event={"ID":"f520fbf8-9403-46bc-9381-226a3a1ed1c7","Type":"ContainerStarted","Data":"888efea2277e570177f0a32dc3869b5a0e7a8f448a8a3f5fd3fa3dbd19d67ef3"} Mar 08 03:11:30.181153 master-0 kubenswrapper[7387]: I0308 03:11:30.181120 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb" Mar 08 03:11:30.181509 master-0 kubenswrapper[7387]: I0308 03:11:30.181480 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9" Mar 08 03:11:31.117964 master-0 kubenswrapper[7387]: I0308 03:11:31.113634 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-758ff9f665-bmgpk"] Mar 08 03:11:31.117964 master-0 kubenswrapper[7387]: I0308 03:11:31.114232 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-758ff9f665-bmgpk" Mar 08 03:11:31.117964 master-0 kubenswrapper[7387]: I0308 03:11:31.117738 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 08 03:11:31.118564 master-0 kubenswrapper[7387]: I0308 03:11:31.118163 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 08 03:11:31.118601 master-0 kubenswrapper[7387]: I0308 03:11:31.118580 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 08 03:11:31.121937 master-0 kubenswrapper[7387]: I0308 03:11:31.118827 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 08 03:11:31.121937 master-0 kubenswrapper[7387]: I0308 03:11:31.121420 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 08 03:11:31.136222 master-0 kubenswrapper[7387]: I0308 03:11:31.136162 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 08 03:11:31.190326 master-0 kubenswrapper[7387]: I0308 03:11:31.190270 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mps4n" event={"ID":"f520fbf8-9403-46bc-9381-226a3a1ed1c7","Type":"ContainerStarted","Data":"048f8b317f590921d5e8542bf17279f35891720d62dece73d2cda0161863eb23"} Mar 08 03:11:31.209732 master-0 kubenswrapper[7387]: I0308 03:11:31.209689 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-client-ca\") pod \"controller-manager-758ff9f665-bmgpk\" (UID: \"63debde5-3369-4cfb-9c82-95690671d24a\") " 
pod="openshift-controller-manager/controller-manager-758ff9f665-bmgpk" Mar 08 03:11:31.209732 master-0 kubenswrapper[7387]: I0308 03:11:31.209726 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spknp\" (UniqueName: \"kubernetes.io/projected/63debde5-3369-4cfb-9c82-95690671d24a-kube-api-access-spknp\") pod \"controller-manager-758ff9f665-bmgpk\" (UID: \"63debde5-3369-4cfb-9c82-95690671d24a\") " pod="openshift-controller-manager/controller-manager-758ff9f665-bmgpk" Mar 08 03:11:31.209875 master-0 kubenswrapper[7387]: I0308 03:11:31.209755 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-config\") pod \"controller-manager-758ff9f665-bmgpk\" (UID: \"63debde5-3369-4cfb-9c82-95690671d24a\") " pod="openshift-controller-manager/controller-manager-758ff9f665-bmgpk" Mar 08 03:11:31.209875 master-0 kubenswrapper[7387]: I0308 03:11:31.209827 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63debde5-3369-4cfb-9c82-95690671d24a-serving-cert\") pod \"controller-manager-758ff9f665-bmgpk\" (UID: \"63debde5-3369-4cfb-9c82-95690671d24a\") " pod="openshift-controller-manager/controller-manager-758ff9f665-bmgpk" Mar 08 03:11:31.209985 master-0 kubenswrapper[7387]: I0308 03:11:31.209882 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-proxy-ca-bundles\") pod \"controller-manager-758ff9f665-bmgpk\" (UID: \"63debde5-3369-4cfb-9c82-95690671d24a\") " pod="openshift-controller-manager/controller-manager-758ff9f665-bmgpk" Mar 08 03:11:31.311085 master-0 kubenswrapper[7387]: I0308 03:11:31.311012 7387 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63debde5-3369-4cfb-9c82-95690671d24a-serving-cert\") pod \"controller-manager-758ff9f665-bmgpk\" (UID: \"63debde5-3369-4cfb-9c82-95690671d24a\") " pod="openshift-controller-manager/controller-manager-758ff9f665-bmgpk" Mar 08 03:11:31.311320 master-0 kubenswrapper[7387]: I0308 03:11:31.311238 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-proxy-ca-bundles\") pod \"controller-manager-758ff9f665-bmgpk\" (UID: \"63debde5-3369-4cfb-9c82-95690671d24a\") " pod="openshift-controller-manager/controller-manager-758ff9f665-bmgpk" Mar 08 03:11:31.311785 master-0 kubenswrapper[7387]: I0308 03:11:31.311708 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-client-ca\") pod \"controller-manager-758ff9f665-bmgpk\" (UID: \"63debde5-3369-4cfb-9c82-95690671d24a\") " pod="openshift-controller-manager/controller-manager-758ff9f665-bmgpk" Mar 08 03:11:31.311940 master-0 kubenswrapper[7387]: E0308 03:11:31.311795 7387 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 08 03:11:31.311940 master-0 kubenswrapper[7387]: E0308 03:11:31.311852 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-client-ca podName:63debde5-3369-4cfb-9c82-95690671d24a nodeName:}" failed. No retries permitted until 2026-03-08 03:11:31.81183676 +0000 UTC m=+28.206312441 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-client-ca") pod "controller-manager-758ff9f665-bmgpk" (UID: "63debde5-3369-4cfb-9c82-95690671d24a") : configmap "client-ca" not found Mar 08 03:11:31.312163 master-0 kubenswrapper[7387]: I0308 03:11:31.311801 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spknp\" (UniqueName: \"kubernetes.io/projected/63debde5-3369-4cfb-9c82-95690671d24a-kube-api-access-spknp\") pod \"controller-manager-758ff9f665-bmgpk\" (UID: \"63debde5-3369-4cfb-9c82-95690671d24a\") " pod="openshift-controller-manager/controller-manager-758ff9f665-bmgpk" Mar 08 03:11:31.312163 master-0 kubenswrapper[7387]: I0308 03:11:31.312136 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-config\") pod \"controller-manager-758ff9f665-bmgpk\" (UID: \"63debde5-3369-4cfb-9c82-95690671d24a\") " pod="openshift-controller-manager/controller-manager-758ff9f665-bmgpk" Mar 08 03:11:31.314932 master-0 kubenswrapper[7387]: I0308 03:11:31.313474 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-proxy-ca-bundles\") pod \"controller-manager-758ff9f665-bmgpk\" (UID: \"63debde5-3369-4cfb-9c82-95690671d24a\") " pod="openshift-controller-manager/controller-manager-758ff9f665-bmgpk" Mar 08 03:11:31.314932 master-0 kubenswrapper[7387]: I0308 03:11:31.313885 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-config\") pod \"controller-manager-758ff9f665-bmgpk\" (UID: \"63debde5-3369-4cfb-9c82-95690671d24a\") " pod="openshift-controller-manager/controller-manager-758ff9f665-bmgpk" Mar 08 03:11:31.322120 
master-0 kubenswrapper[7387]: I0308 03:11:31.322054 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63debde5-3369-4cfb-9c82-95690671d24a-serving-cert\") pod \"controller-manager-758ff9f665-bmgpk\" (UID: \"63debde5-3369-4cfb-9c82-95690671d24a\") " pod="openshift-controller-manager/controller-manager-758ff9f665-bmgpk" Mar 08 03:11:31.483395 master-0 kubenswrapper[7387]: I0308 03:11:31.483280 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"] Mar 08 03:11:31.483395 master-0 kubenswrapper[7387]: I0308 03:11:31.483346 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-p6kjc"] Mar 08 03:11:31.483395 master-0 kubenswrapper[7387]: I0308 03:11:31.483363 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-758ff9f665-bmgpk"] Mar 08 03:11:31.500333 master-0 kubenswrapper[7387]: W0308 03:11:31.500283 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b090750_b893_42fe_8def_dfb3f4253d43.slice/crio-361223b8a35fa2e488a299fb5b083b6bc9563230c5745f5243422471a4cde526 WatchSource:0}: Error finding container 361223b8a35fa2e488a299fb5b083b6bc9563230c5745f5243422471a4cde526: Status 404 returned error can't find the container with id 361223b8a35fa2e488a299fb5b083b6bc9563230c5745f5243422471a4cde526 Mar 08 03:11:31.818638 master-0 kubenswrapper[7387]: I0308 03:11:31.818594 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-client-ca\") pod \"controller-manager-758ff9f665-bmgpk\" (UID: \"63debde5-3369-4cfb-9c82-95690671d24a\") " pod="openshift-controller-manager/controller-manager-758ff9f665-bmgpk" Mar 08 03:11:31.818923 master-0 kubenswrapper[7387]: 
E0308 03:11:31.818729 7387 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 08 03:11:31.818923 master-0 kubenswrapper[7387]: E0308 03:11:31.818779 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-client-ca podName:63debde5-3369-4cfb-9c82-95690671d24a nodeName:}" failed. No retries permitted until 2026-03-08 03:11:32.818764865 +0000 UTC m=+29.213240536 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-client-ca") pod "controller-manager-758ff9f665-bmgpk" (UID: "63debde5-3369-4cfb-9c82-95690671d24a") : configmap "client-ca" not found Mar 08 03:11:31.974402 master-0 kubenswrapper[7387]: I0308 03:11:31.974355 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-855f6f6d7d-t5fdb"] Mar 08 03:11:31.987737 master-0 kubenswrapper[7387]: I0308 03:11:31.987706 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spknp\" (UniqueName: \"kubernetes.io/projected/63debde5-3369-4cfb-9c82-95690671d24a-kube-api-access-spknp\") pod \"controller-manager-758ff9f665-bmgpk\" (UID: \"63debde5-3369-4cfb-9c82-95690671d24a\") " pod="openshift-controller-manager/controller-manager-758ff9f665-bmgpk" Mar 08 03:11:32.020328 master-0 kubenswrapper[7387]: I0308 03:11:32.020287 7387 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/861865c2-a446-4bbf-ad71-7900d991f207-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 03:11:32.197419 master-0 kubenswrapper[7387]: I0308 03:11:32.197293 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-p6kjc" 
event={"ID":"9b090750-b893-42fe-8def-dfb3f4253d43","Type":"ContainerStarted","Data":"361223b8a35fa2e488a299fb5b083b6bc9563230c5745f5243422471a4cde526"} Mar 08 03:11:32.829977 master-0 kubenswrapper[7387]: I0308 03:11:32.828693 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-client-ca\") pod \"controller-manager-758ff9f665-bmgpk\" (UID: \"63debde5-3369-4cfb-9c82-95690671d24a\") " pod="openshift-controller-manager/controller-manager-758ff9f665-bmgpk" Mar 08 03:11:32.829977 master-0 kubenswrapper[7387]: E0308 03:11:32.828825 7387 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 08 03:11:32.829977 master-0 kubenswrapper[7387]: E0308 03:11:32.828883 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-client-ca podName:63debde5-3369-4cfb-9c82-95690671d24a nodeName:}" failed. No retries permitted until 2026-03-08 03:11:34.828867778 +0000 UTC m=+31.223343459 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-client-ca") pod "controller-manager-758ff9f665-bmgpk" (UID: "63debde5-3369-4cfb-9c82-95690671d24a") : configmap "client-ca" not found Mar 08 03:11:33.361184 master-0 kubenswrapper[7387]: I0308 03:11:33.361128 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7b545788fb-82rjl"] Mar 08 03:11:33.367005 master-0 kubenswrapper[7387]: I0308 03:11:33.361920 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.379560 master-0 kubenswrapper[7387]: I0308 03:11:33.379019 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 08 03:11:33.379560 master-0 kubenswrapper[7387]: I0308 03:11:33.379092 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 08 03:11:33.379560 master-0 kubenswrapper[7387]: I0308 03:11:33.379245 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 08 03:11:33.379560 master-0 kubenswrapper[7387]: I0308 03:11:33.379367 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 08 03:11:33.388151 master-0 kubenswrapper[7387]: I0308 03:11:33.386784 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 08 03:11:33.388151 master-0 kubenswrapper[7387]: I0308 03:11:33.387045 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 08 03:11:33.388151 master-0 kubenswrapper[7387]: I0308 03:11:33.387163 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 08 03:11:33.388151 master-0 kubenswrapper[7387]: I0308 03:11:33.387700 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 08 03:11:33.440697 master-0 kubenswrapper[7387]: I0308 03:11:33.440645 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4gf5\" (UniqueName: \"kubernetes.io/projected/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-kube-api-access-h4gf5\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " 
pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.440877 master-0 kubenswrapper[7387]: I0308 03:11:33.440717 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-etcd-serving-ca\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.440877 master-0 kubenswrapper[7387]: I0308 03:11:33.440739 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-trusted-ca-bundle\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.440877 master-0 kubenswrapper[7387]: I0308 03:11:33.440794 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-encryption-config\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.440877 master-0 kubenswrapper[7387]: I0308 03:11:33.440813 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-audit-dir\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.440877 master-0 kubenswrapper[7387]: I0308 03:11:33.440842 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-serving-cert\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.441072 master-0 kubenswrapper[7387]: I0308 03:11:33.440949 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-audit-policies\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.441072 master-0 kubenswrapper[7387]: I0308 03:11:33.441007 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-etcd-client\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.542465 master-0 kubenswrapper[7387]: I0308 03:11:33.542368 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4gf5\" (UniqueName: \"kubernetes.io/projected/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-kube-api-access-h4gf5\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.542465 master-0 kubenswrapper[7387]: I0308 03:11:33.542469 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-trusted-ca-bundle\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.542979 master-0 kubenswrapper[7387]: I0308 03:11:33.542487 
7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-etcd-serving-ca\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.542979 master-0 kubenswrapper[7387]: I0308 03:11:33.542544 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-encryption-config\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.542979 master-0 kubenswrapper[7387]: I0308 03:11:33.542754 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-audit-dir\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.542979 master-0 kubenswrapper[7387]: I0308 03:11:33.542788 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-serving-cert\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.542979 master-0 kubenswrapper[7387]: I0308 03:11:33.542851 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-audit-dir\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.543306 master-0 
kubenswrapper[7387]: I0308 03:11:33.543257 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-audit-policies\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.543306 master-0 kubenswrapper[7387]: I0308 03:11:33.543303 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-etcd-client\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.543579 master-0 kubenswrapper[7387]: I0308 03:11:33.543546 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-trusted-ca-bundle\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.543931 master-0 kubenswrapper[7387]: I0308 03:11:33.543870 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-etcd-serving-ca\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.544237 master-0 kubenswrapper[7387]: I0308 03:11:33.544187 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-audit-policies\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " 
pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.546163 master-0 kubenswrapper[7387]: I0308 03:11:33.546102 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-encryption-config\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.546431 master-0 kubenswrapper[7387]: I0308 03:11:33.546404 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-etcd-client\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.556747 master-0 kubenswrapper[7387]: I0308 03:11:33.556689 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-serving-cert\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.661925 master-0 kubenswrapper[7387]: I0308 03:11:33.661770 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7b545788fb-82rjl"] Mar 08 03:11:33.675564 master-0 kubenswrapper[7387]: I0308 03:11:33.675528 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4gf5\" (UniqueName: \"kubernetes.io/projected/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-kube-api-access-h4gf5\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.683682 master-0 kubenswrapper[7387]: I0308 03:11:33.683605 7387 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-dns/node-resolver-mps4n" podStartSLOduration=4.6835856190000005 podStartE2EDuration="4.683585619s" podCreationTimestamp="2026-03-08 03:11:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:11:33.682335777 +0000 UTC m=+30.076811478" watchObservedRunningTime="2026-03-08 03:11:33.683585619 +0000 UTC m=+30.078061300" Mar 08 03:11:33.719737 master-0 kubenswrapper[7387]: I0308 03:11:33.718986 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:11:33.772746 master-0 kubenswrapper[7387]: I0308 03:11:33.772700 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="861865c2-a446-4bbf-ad71-7900d991f207" path="/var/lib/kubelet/pods/861865c2-a446-4bbf-ad71-7900d991f207/volumes" Mar 08 03:11:34.687882 master-0 kubenswrapper[7387]: I0308 03:11:34.687798 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm"] Mar 08 03:11:34.691945 master-0 kubenswrapper[7387]: I0308 03:11:34.688947 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm" Mar 08 03:11:34.705440 master-0 kubenswrapper[7387]: I0308 03:11:34.705355 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 08 03:11:34.705732 master-0 kubenswrapper[7387]: I0308 03:11:34.705534 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 08 03:11:34.705732 master-0 kubenswrapper[7387]: I0308 03:11:34.705717 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 08 03:11:34.706039 master-0 kubenswrapper[7387]: I0308 03:11:34.705855 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 08 03:11:34.706039 master-0 kubenswrapper[7387]: I0308 03:11:34.705959 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 08 03:11:34.803617 master-0 kubenswrapper[7387]: I0308 03:11:34.803539 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pdnh\" (UniqueName: \"kubernetes.io/projected/48cb3a00-5875-4d62-8afd-f964c9545c65-kube-api-access-7pdnh\") pod \"route-controller-manager-6ff96cfc69-gqmqm\" (UID: \"48cb3a00-5875-4d62-8afd-f964c9545c65\") " pod="openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm" Mar 08 03:11:34.803617 master-0 kubenswrapper[7387]: I0308 03:11:34.803634 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48cb3a00-5875-4d62-8afd-f964c9545c65-config\") pod \"route-controller-manager-6ff96cfc69-gqmqm\" (UID: \"48cb3a00-5875-4d62-8afd-f964c9545c65\") " 
pod="openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm" Mar 08 03:11:34.804155 master-0 kubenswrapper[7387]: I0308 03:11:34.804083 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48cb3a00-5875-4d62-8afd-f964c9545c65-client-ca\") pod \"route-controller-manager-6ff96cfc69-gqmqm\" (UID: \"48cb3a00-5875-4d62-8afd-f964c9545c65\") " pod="openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm" Mar 08 03:11:34.804343 master-0 kubenswrapper[7387]: I0308 03:11:34.804187 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48cb3a00-5875-4d62-8afd-f964c9545c65-serving-cert\") pod \"route-controller-manager-6ff96cfc69-gqmqm\" (UID: \"48cb3a00-5875-4d62-8afd-f964c9545c65\") " pod="openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm" Mar 08 03:11:34.904893 master-0 kubenswrapper[7387]: I0308 03:11:34.904837 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pdnh\" (UniqueName: \"kubernetes.io/projected/48cb3a00-5875-4d62-8afd-f964c9545c65-kube-api-access-7pdnh\") pod \"route-controller-manager-6ff96cfc69-gqmqm\" (UID: \"48cb3a00-5875-4d62-8afd-f964c9545c65\") " pod="openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm" Mar 08 03:11:34.905162 master-0 kubenswrapper[7387]: I0308 03:11:34.905044 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48cb3a00-5875-4d62-8afd-f964c9545c65-config\") pod \"route-controller-manager-6ff96cfc69-gqmqm\" (UID: \"48cb3a00-5875-4d62-8afd-f964c9545c65\") " pod="openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm" Mar 08 03:11:34.905162 master-0 kubenswrapper[7387]: I0308 03:11:34.905108 7387 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-client-ca\") pod \"controller-manager-758ff9f665-bmgpk\" (UID: \"63debde5-3369-4cfb-9c82-95690671d24a\") " pod="openshift-controller-manager/controller-manager-758ff9f665-bmgpk" Mar 08 03:11:34.905249 master-0 kubenswrapper[7387]: I0308 03:11:34.905194 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48cb3a00-5875-4d62-8afd-f964c9545c65-client-ca\") pod \"route-controller-manager-6ff96cfc69-gqmqm\" (UID: \"48cb3a00-5875-4d62-8afd-f964c9545c65\") " pod="openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm" Mar 08 03:11:34.905249 master-0 kubenswrapper[7387]: I0308 03:11:34.905217 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48cb3a00-5875-4d62-8afd-f964c9545c65-serving-cert\") pod \"route-controller-manager-6ff96cfc69-gqmqm\" (UID: \"48cb3a00-5875-4d62-8afd-f964c9545c65\") " pod="openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm" Mar 08 03:11:34.905459 master-0 kubenswrapper[7387]: E0308 03:11:34.905427 7387 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 08 03:11:34.905532 master-0 kubenswrapper[7387]: E0308 03:11:34.905511 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-client-ca podName:63debde5-3369-4cfb-9c82-95690671d24a nodeName:}" failed. No retries permitted until 2026-03-08 03:11:38.905485846 +0000 UTC m=+35.299961567 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-client-ca") pod "controller-manager-758ff9f665-bmgpk" (UID: "63debde5-3369-4cfb-9c82-95690671d24a") : configmap "client-ca" not found Mar 08 03:11:34.905802 master-0 kubenswrapper[7387]: E0308 03:11:34.905774 7387 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 08 03:11:34.905846 master-0 kubenswrapper[7387]: E0308 03:11:34.905821 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48cb3a00-5875-4d62-8afd-f964c9545c65-client-ca podName:48cb3a00-5875-4d62-8afd-f964c9545c65 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:35.405807785 +0000 UTC m=+31.800283476 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/48cb3a00-5875-4d62-8afd-f964c9545c65-client-ca") pod "route-controller-manager-6ff96cfc69-gqmqm" (UID: "48cb3a00-5875-4d62-8afd-f964c9545c65") : configmap "client-ca" not found Mar 08 03:11:34.908059 master-0 kubenswrapper[7387]: I0308 03:11:34.908025 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48cb3a00-5875-4d62-8afd-f964c9545c65-config\") pod \"route-controller-manager-6ff96cfc69-gqmqm\" (UID: \"48cb3a00-5875-4d62-8afd-f964c9545c65\") " pod="openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm" Mar 08 03:11:34.912123 master-0 kubenswrapper[7387]: I0308 03:11:34.912095 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48cb3a00-5875-4d62-8afd-f964c9545c65-serving-cert\") pod \"route-controller-manager-6ff96cfc69-gqmqm\" (UID: \"48cb3a00-5875-4d62-8afd-f964c9545c65\") " pod="openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm" Mar 08 
03:11:35.100563 master-0 kubenswrapper[7387]: I0308 03:11:35.098892 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9"] Mar 08 03:11:35.139533 master-0 kubenswrapper[7387]: I0308 03:11:35.139476 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm"] Mar 08 03:11:35.139533 master-0 kubenswrapper[7387]: I0308 03:11:35.139529 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2"] Mar 08 03:11:35.140203 master-0 kubenswrapper[7387]: I0308 03:11:35.140173 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55c6bff5f-rc8k9"] Mar 08 03:11:35.140277 master-0 kubenswrapper[7387]: I0308 03:11:35.140264 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:11:35.147548 master-0 kubenswrapper[7387]: I0308 03:11:35.147488 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 08 03:11:35.147726 master-0 kubenswrapper[7387]: I0308 03:11:35.147648 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 08 03:11:35.170019 master-0 kubenswrapper[7387]: I0308 03:11:35.154897 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 08 03:11:35.211590 master-0 kubenswrapper[7387]: I0308 03:11:35.211529 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/399c5025-da66-4c52-8e68-ea6c996d9cc8-cache\") pod 
\"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:11:35.211820 master-0 kubenswrapper[7387]: I0308 03:11:35.211600 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/399c5025-da66-4c52-8e68-ea6c996d9cc8-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:11:35.211820 master-0 kubenswrapper[7387]: I0308 03:11:35.211668 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/399c5025-da66-4c52-8e68-ea6c996d9cc8-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:11:35.211820 master-0 kubenswrapper[7387]: I0308 03:11:35.211694 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr9bw\" (UniqueName: \"kubernetes.io/projected/399c5025-da66-4c52-8e68-ea6c996d9cc8-kube-api-access-vr9bw\") pod \"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:11:35.211820 master-0 kubenswrapper[7387]: I0308 03:11:35.211800 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/399c5025-da66-4c52-8e68-ea6c996d9cc8-ca-certs\") pod 
\"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:11:35.212015 master-0 kubenswrapper[7387]: I0308 03:11:35.211844 7387 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 03:11:35.212015 master-0 kubenswrapper[7387]: I0308 03:11:35.211859 7387 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 03:11:35.313127 master-0 kubenswrapper[7387]: I0308 03:11:35.313062 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/399c5025-da66-4c52-8e68-ea6c996d9cc8-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:11:35.313295 master-0 kubenswrapper[7387]: I0308 03:11:35.313164 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr9bw\" (UniqueName: \"kubernetes.io/projected/399c5025-da66-4c52-8e68-ea6c996d9cc8-kube-api-access-vr9bw\") pod \"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:11:35.313427 master-0 kubenswrapper[7387]: I0308 03:11:35.313349 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/399c5025-da66-4c52-8e68-ea6c996d9cc8-ca-certs\") pod 
\"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:11:35.313427 master-0 kubenswrapper[7387]: I0308 03:11:35.313400 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/399c5025-da66-4c52-8e68-ea6c996d9cc8-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:11:35.313531 master-0 kubenswrapper[7387]: I0308 03:11:35.313470 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/399c5025-da66-4c52-8e68-ea6c996d9cc8-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:11:35.314233 master-0 kubenswrapper[7387]: I0308 03:11:35.314118 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/399c5025-da66-4c52-8e68-ea6c996d9cc8-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:11:35.314419 master-0 kubenswrapper[7387]: I0308 03:11:35.314207 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/399c5025-da66-4c52-8e68-ea6c996d9cc8-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:11:35.314501 master-0 kubenswrapper[7387]: I0308 03:11:35.314247 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/399c5025-da66-4c52-8e68-ea6c996d9cc8-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:11:35.318106 master-0 kubenswrapper[7387]: I0308 03:11:35.318018 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/399c5025-da66-4c52-8e68-ea6c996d9cc8-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:11:35.417087 master-0 kubenswrapper[7387]: I0308 03:11:35.415699 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48cb3a00-5875-4d62-8afd-f964c9545c65-client-ca\") pod \"route-controller-manager-6ff96cfc69-gqmqm\" (UID: \"48cb3a00-5875-4d62-8afd-f964c9545c65\") " pod="openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm" Mar 08 03:11:35.417087 master-0 kubenswrapper[7387]: E0308 03:11:35.416126 7387 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 08 03:11:35.417087 master-0 kubenswrapper[7387]: E0308 03:11:35.416213 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48cb3a00-5875-4d62-8afd-f964c9545c65-client-ca podName:48cb3a00-5875-4d62-8afd-f964c9545c65 nodeName:}" failed. 
No retries permitted until 2026-03-08 03:11:36.416182641 +0000 UTC m=+32.810658352 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/48cb3a00-5875-4d62-8afd-f964c9545c65-client-ca") pod "route-controller-manager-6ff96cfc69-gqmqm" (UID: "48cb3a00-5875-4d62-8afd-f964c9545c65") : configmap "client-ca" not found Mar 08 03:11:35.442649 master-0 kubenswrapper[7387]: I0308 03:11:35.437553 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"] Mar 08 03:11:35.444218 master-0 kubenswrapper[7387]: I0308 03:11:35.443295 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" Mar 08 03:11:35.449708 master-0 kubenswrapper[7387]: I0308 03:11:35.448280 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2"] Mar 08 03:11:35.449708 master-0 kubenswrapper[7387]: I0308 03:11:35.448484 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 08 03:11:35.449708 master-0 kubenswrapper[7387]: I0308 03:11:35.449356 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 08 03:11:35.451853 master-0 kubenswrapper[7387]: I0308 03:11:35.450389 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 08 03:11:35.479262 master-0 kubenswrapper[7387]: I0308 03:11:35.479195 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 08 03:11:35.516772 master-0 kubenswrapper[7387]: I0308 03:11:35.516692 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: 
\"kubernetes.io/host-path/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" Mar 08 03:11:35.517087 master-0 kubenswrapper[7387]: I0308 03:11:35.517030 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c72dm\" (UniqueName: \"kubernetes.io/projected/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-kube-api-access-c72dm\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" Mar 08 03:11:35.517180 master-0 kubenswrapper[7387]: I0308 03:11:35.517133 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" Mar 08 03:11:35.517253 master-0 kubenswrapper[7387]: I0308 03:11:35.517209 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" Mar 08 03:11:35.517491 master-0 kubenswrapper[7387]: I0308 03:11:35.517446 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: 
\"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"
Mar 08 03:11:35.517547 master-0 kubenswrapper[7387]: I0308 03:11:35.517520 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"
Mar 08 03:11:35.619488 master-0 kubenswrapper[7387]: I0308 03:11:35.619397 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c72dm\" (UniqueName: \"kubernetes.io/projected/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-kube-api-access-c72dm\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"
Mar 08 03:11:35.619738 master-0 kubenswrapper[7387]: I0308 03:11:35.619544 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"
Mar 08 03:11:35.619738 master-0 kubenswrapper[7387]: I0308 03:11:35.619610 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"
Mar 08 03:11:35.619855 master-0 kubenswrapper[7387]: I0308 03:11:35.619785 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"
Mar 08 03:11:35.619952 master-0 kubenswrapper[7387]: I0308 03:11:35.619879 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"
Mar 08 03:11:35.620060 master-0 kubenswrapper[7387]: I0308 03:11:35.620008 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"
Mar 08 03:11:35.620306 master-0 kubenswrapper[7387]: I0308 03:11:35.620262 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"
Mar 08 03:11:35.622181 master-0 kubenswrapper[7387]: I0308 03:11:35.622111 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"
Mar 08 03:11:35.644483 master-0 kubenswrapper[7387]: I0308 03:11:35.643471 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"]
Mar 08 03:11:35.657740 master-0 kubenswrapper[7387]: I0308 03:11:35.645895 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"
Mar 08 03:11:35.657740 master-0 kubenswrapper[7387]: I0308 03:11:35.655611 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"
Mar 08 03:11:35.669227 master-0 kubenswrapper[7387]: I0308 03:11:35.667095 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"
Mar 08 03:11:35.690525 master-0 kubenswrapper[7387]: I0308 03:11:35.689959 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pdnh\" (UniqueName: \"kubernetes.io/projected/48cb3a00-5875-4d62-8afd-f964c9545c65-kube-api-access-7pdnh\") pod \"route-controller-manager-6ff96cfc69-gqmqm\" (UID: \"48cb3a00-5875-4d62-8afd-f964c9545c65\") " pod="openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm"
Mar 08 03:11:35.719246 master-0 kubenswrapper[7387]: I0308 03:11:35.718935 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c72dm\" (UniqueName: \"kubernetes.io/projected/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-kube-api-access-c72dm\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"
Mar 08 03:11:35.724849 master-0 kubenswrapper[7387]: I0308 03:11:35.724794 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr9bw\" (UniqueName: \"kubernetes.io/projected/399c5025-da66-4c52-8e68-ea6c996d9cc8-kube-api-access-vr9bw\") pod \"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2"
Mar 08 03:11:35.766343 master-0 kubenswrapper[7387]: I0308 03:11:35.765154 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba00bf40-26c1-4eb6-b540-a32cb4ece9a2" path="/var/lib/kubelet/pods/ba00bf40-26c1-4eb6-b540-a32cb4ece9a2/volumes"
Mar 08 03:11:35.779379 master-0 kubenswrapper[7387]: I0308 03:11:35.776002 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"
Mar 08 03:11:35.785321 master-0 kubenswrapper[7387]: I0308 03:11:35.785260 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2"
Mar 08 03:11:35.911957 master-0 kubenswrapper[7387]: I0308 03:11:35.911524 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7b545788fb-82rjl"]
Mar 08 03:11:35.922021 master-0 kubenswrapper[7387]: W0308 03:11:35.920458 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a2a141d_a4c3_4b6c_a90b_d184f61a14dd.slice/crio-b5a1a52b83c9907ea89396038c11ee345fe83157541875e3f7507eab9c4bb205 WatchSource:0}: Error finding container b5a1a52b83c9907ea89396038c11ee345fe83157541875e3f7507eab9c4bb205: Status 404 returned error can't find the container with id b5a1a52b83c9907ea89396038c11ee345fe83157541875e3f7507eab9c4bb205
Mar 08 03:11:36.216577 master-0 kubenswrapper[7387]: I0308 03:11:36.216528 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2"]
Mar 08 03:11:36.217323 master-0 kubenswrapper[7387]: I0308 03:11:36.217292 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-p6kjc" event={"ID":"9b090750-b893-42fe-8def-dfb3f4253d43","Type":"ContainerStarted","Data":"17de7ef678b820bcfdaedd0d23e56c95190c6f323f2a0d0eb815fe5d4033dd8c"}
Mar 08 03:11:36.218362 master-0 kubenswrapper[7387]: I0308 03:11:36.218315 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" event={"ID":"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd","Type":"ContainerStarted","Data":"b5a1a52b83c9907ea89396038c11ee345fe83157541875e3f7507eab9c4bb205"}
Mar 08 03:11:36.220547 master-0 kubenswrapper[7387]: I0308 03:11:36.220501 7387 generic.go:334] "Generic (PLEG): container finished" podID="f2057f75-159d-4416-a234-050f0fe1afc9" containerID="440a29663d98c3dc23222b22803d7c93cc008176e47ed0828f4038b3d61a2b4c" exitCode=0
Mar 08 03:11:36.220710 master-0 kubenswrapper[7387]: I0308 03:11:36.220537 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" event={"ID":"f2057f75-159d-4416-a234-050f0fe1afc9","Type":"ContainerDied","Data":"440a29663d98c3dc23222b22803d7c93cc008176e47ed0828f4038b3d61a2b4c"}
Mar 08 03:11:36.307113 master-0 kubenswrapper[7387]: I0308 03:11:36.307003 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"]
Mar 08 03:11:36.337320 master-0 kubenswrapper[7387]: W0308 03:11:36.337275 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7074cf90_9aa5_41ab_a4c4_c3e1a1044c1b.slice/crio-b1f92e19e760a85c21780cc29101c92446f01b76f5fa8e09729c263a935894ed WatchSource:0}: Error finding container b1f92e19e760a85c21780cc29101c92446f01b76f5fa8e09729c263a935894ed: Status 404 returned error can't find the container with id b1f92e19e760a85c21780cc29101c92446f01b76f5fa8e09729c263a935894ed
Mar 08 03:11:36.430092 master-0 kubenswrapper[7387]: I0308 03:11:36.430024 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48cb3a00-5875-4d62-8afd-f964c9545c65-client-ca\") pod \"route-controller-manager-6ff96cfc69-gqmqm\" (UID: \"48cb3a00-5875-4d62-8afd-f964c9545c65\") " pod="openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm"
Mar 08 03:11:36.430239 master-0 kubenswrapper[7387]: E0308 03:11:36.430205 7387 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 08 03:11:36.430318 master-0 kubenswrapper[7387]: E0308 03:11:36.430304 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48cb3a00-5875-4d62-8afd-f964c9545c65-client-ca podName:48cb3a00-5875-4d62-8afd-f964c9545c65 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:38.430283969 +0000 UTC m=+34.824759650 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/48cb3a00-5875-4d62-8afd-f964c9545c65-client-ca") pod "route-controller-manager-6ff96cfc69-gqmqm" (UID: "48cb3a00-5875-4d62-8afd-f964c9545c65") : configmap "client-ca" not found
Mar 08 03:11:36.733941 master-0 kubenswrapper[7387]: I0308 03:11:36.732393 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:11:36.733941 master-0 kubenswrapper[7387]: I0308 03:11:36.732499 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs\") pod \"multus-admission-controller-8d675b596-xhkzl\" (UID: \"d5f84bd4-2803-41ff-a1d1-a549991fe895\") " pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl"
Mar 08 03:11:36.733941 master-0 kubenswrapper[7387]: I0308 03:11:36.732531 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-8qznw\" (UID: \"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw"
Mar 08 03:11:36.733941 master-0 kubenswrapper[7387]: I0308 03:11:36.732560 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf"
Mar 08 03:11:36.733941 master-0 kubenswrapper[7387]: I0308 03:11:36.732606 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx"
Mar 08 03:11:36.733941 master-0 kubenswrapper[7387]: I0308 03:11:36.732635 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert\") pod \"olm-operator-d64cfc9db-t659n\" (UID: \"d68278f6-59d5-4bbf-b969-e47635ffd4cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n"
Mar 08 03:11:36.733941 master-0 kubenswrapper[7387]: I0308 03:11:36.732658 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert\") pod \"catalog-operator-7d9c49f57b-wsswx\" (UID: \"5a92a557-d023-4531-b3a3-e559af0fe358\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx"
Mar 08 03:11:36.744943 master-0 kubenswrapper[7387]: I0308 03:11:36.743459 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf"
Mar 08 03:11:36.766945 master-0 kubenswrapper[7387]: I0308 03:11:36.753473 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx"
Mar 08 03:11:36.766945 master-0 kubenswrapper[7387]: I0308 03:11:36.757548 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert\") pod \"catalog-operator-7d9c49f57b-wsswx\" (UID: \"5a92a557-d023-4531-b3a3-e559af0fe358\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx"
Mar 08 03:11:36.766945 master-0 kubenswrapper[7387]: I0308 03:11:36.758201 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert\") pod \"olm-operator-d64cfc9db-t659n\" (UID: \"d68278f6-59d5-4bbf-b969-e47635ffd4cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n"
Mar 08 03:11:36.766945 master-0 kubenswrapper[7387]: I0308 03:11:36.758575 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs\") pod \"multus-admission-controller-8d675b596-xhkzl\" (UID: \"d5f84bd4-2803-41ff-a1d1-a549991fe895\") " pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl"
Mar 08 03:11:36.766945 master-0 kubenswrapper[7387]: I0308 03:11:36.759076 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-8qznw\" (UID: \"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw"
Mar 08 03:11:36.766945 master-0 kubenswrapper[7387]: I0308 03:11:36.761773 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:11:36.838953 master-0 kubenswrapper[7387]: I0308 03:11:36.838869 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx"
Mar 08 03:11:36.839321 master-0 kubenswrapper[7387]: I0308 03:11:36.839302 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx"
Mar 08 03:11:36.839557 master-0 kubenswrapper[7387]: I0308 03:11:36.839538 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf"
Mar 08 03:11:36.839895 master-0 kubenswrapper[7387]: I0308 03:11:36.839846 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw"
Mar 08 03:11:36.840051 master-0 kubenswrapper[7387]: I0308 03:11:36.839954 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n"
Mar 08 03:11:36.840325 master-0 kubenswrapper[7387]: I0308 03:11:36.840222 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl"
Mar 08 03:11:36.840440 master-0 kubenswrapper[7387]: I0308 03:11:36.840368 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:11:37.238526 master-0 kubenswrapper[7387]: I0308 03:11:37.236760 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-p6kjc" event={"ID":"9b090750-b893-42fe-8def-dfb3f4253d43","Type":"ContainerStarted","Data":"312b35894c75e8f5a8bd66f6cc9a6c75f208870aa7d443d3c719e7a3aa7a6840"}
Mar 08 03:11:37.238526 master-0 kubenswrapper[7387]: I0308 03:11:37.236880 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-p6kjc"
Mar 08 03:11:37.241089 master-0 kubenswrapper[7387]: I0308 03:11:37.241013 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" event={"ID":"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b","Type":"ContainerStarted","Data":"847ec71b717fbc403d7670e2fb6fcb0eb16c5961bfffd67ba80ebb137144703d"}
Mar 08 03:11:37.241089 master-0 kubenswrapper[7387]: I0308 03:11:37.241060 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" event={"ID":"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b","Type":"ContainerStarted","Data":"bdcefdd75b70a05e06ef82c47f20ca576f0969ce90111e774b57c7400f29d26f"}
Mar 08 03:11:37.241089 master-0 kubenswrapper[7387]: I0308 03:11:37.241071 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" event={"ID":"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b","Type":"ContainerStarted","Data":"b1f92e19e760a85c21780cc29101c92446f01b76f5fa8e09729c263a935894ed"}
Mar 08 03:11:37.241308 master-0 kubenswrapper[7387]: I0308 03:11:37.241186 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"
Mar 08 03:11:37.244018 master-0 kubenswrapper[7387]: I0308 03:11:37.243179 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" event={"ID":"f2057f75-159d-4416-a234-050f0fe1afc9","Type":"ContainerStarted","Data":"e254357448a32791651722dccdb0cdaf437e5c3f65ad3ed7dc808ee28c5ad63d"}
Mar 08 03:11:37.244018 master-0 kubenswrapper[7387]: I0308 03:11:37.243198 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" event={"ID":"f2057f75-159d-4416-a234-050f0fe1afc9","Type":"ContainerStarted","Data":"7caba3f1c8cef4f2c4a50478308d278b9530778d34c2c9351d827eccffb7d81c"}
Mar 08 03:11:37.246520 master-0 kubenswrapper[7387]: I0308 03:11:37.246468 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" event={"ID":"399c5025-da66-4c52-8e68-ea6c996d9cc8","Type":"ContainerStarted","Data":"378037d391e3dcdb0053d43d901a7ee5851d33db4a955457fbfbc1974763fcfd"}
Mar 08 03:11:37.246585 master-0 kubenswrapper[7387]: I0308 03:11:37.246521 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" event={"ID":"399c5025-da66-4c52-8e68-ea6c996d9cc8","Type":"ContainerStarted","Data":"a8f3f14f501b72ff362550257f13a332eecf70ec4f446aeb3d199baf5fd9fcca"}
Mar 08 03:11:37.246585 master-0 kubenswrapper[7387]: I0308 03:11:37.246538 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" event={"ID":"399c5025-da66-4c52-8e68-ea6c996d9cc8","Type":"ContainerStarted","Data":"5f8a5dd7ddb9e30727d036901155a403a90563b27d3748f6e9c804013b40f108"}
Mar 08 03:11:37.247117 master-0 kubenswrapper[7387]: I0308 03:11:37.247083 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2"
Mar 08 03:11:37.257419 master-0 kubenswrapper[7387]: I0308 03:11:37.257159 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-p6kjc" podStartSLOduration=4.000323092 podStartE2EDuration="8.257140317s" podCreationTimestamp="2026-03-08 03:11:29 +0000 UTC" firstStartedPulling="2026-03-08 03:11:31.501488093 +0000 UTC m=+27.895963774" lastFinishedPulling="2026-03-08 03:11:35.758305328 +0000 UTC m=+32.152780999" observedRunningTime="2026-03-08 03:11:37.255146254 +0000 UTC m=+33.649621965" watchObservedRunningTime="2026-03-08 03:11:37.257140317 +0000 UTC m=+33.651615998"
Mar 08 03:11:37.296924 master-0 kubenswrapper[7387]: I0308 03:11:37.296825 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" podStartSLOduration=7.8943871439999995 podStartE2EDuration="15.296800651s" podCreationTimestamp="2026-03-08 03:11:22 +0000 UTC" firstStartedPulling="2026-03-08 03:11:28.355999774 +0000 UTC m=+24.750475455" lastFinishedPulling="2026-03-08 03:11:35.758413281 +0000 UTC m=+32.152888962" observedRunningTime="2026-03-08 03:11:37.29222506 +0000 UTC m=+33.686700741" watchObservedRunningTime="2026-03-08 03:11:37.296800651 +0000 UTC m=+33.691276332"
Mar 08 03:11:37.310720 master-0 kubenswrapper[7387]: I0308 03:11:37.308105 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" podStartSLOduration=3.308086348 podStartE2EDuration="3.308086348s" podCreationTimestamp="2026-03-08 03:11:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:11:37.307573955 +0000 UTC m=+33.702049656" watchObservedRunningTime="2026-03-08 03:11:37.308086348 +0000 UTC m=+33.702562029"
Mar 08 03:11:37.385207 master-0 kubenswrapper[7387]: I0308 03:11:37.385147 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" podStartSLOduration=3.385131215 podStartE2EDuration="3.385131215s" podCreationTimestamp="2026-03-08 03:11:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:11:37.384896399 +0000 UTC m=+33.779372100" watchObservedRunningTime="2026-03-08 03:11:37.385131215 +0000 UTC m=+33.779606896"
Mar 08 03:11:37.466593 master-0 kubenswrapper[7387]: I0308 03:11:37.466538 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2l64n"]
Mar 08 03:11:37.467312 master-0 kubenswrapper[7387]: I0308 03:11:37.467282 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-xhkzl"]
Mar 08 03:11:37.468619 master-0 kubenswrapper[7387]: I0308 03:11:37.468592 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx"]
Mar 08 03:11:37.479232 master-0 kubenswrapper[7387]: I0308 03:11:37.470881 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf"]
Mar 08 03:11:37.479232 master-0 kubenswrapper[7387]: I0308 03:11:37.475106 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx"]
Mar 08 03:11:37.479232 master-0 kubenswrapper[7387]: W0308 03:11:37.477049 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6ee6202_11e5_4586_ae46_075da1ad7f1a.slice/crio-3cd41a65358471f5054db74b4750cf6ade61d95a5a85377f17ce5e88dcbed459 WatchSource:0}: Error finding container 3cd41a65358471f5054db74b4750cf6ade61d95a5a85377f17ce5e88dcbed459: Status 404 returned error can't find the container with id 3cd41a65358471f5054db74b4750cf6ade61d95a5a85377f17ce5e88dcbed459
Mar 08 03:11:37.485326 master-0 kubenswrapper[7387]: W0308 03:11:37.485292 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5f84bd4_2803_41ff_a1d1_a549991fe895.slice/crio-b303d9907e09a871fa5a36f0194c592a76421a2844b95a9ceaaef97f1d545abf WatchSource:0}: Error finding container b303d9907e09a871fa5a36f0194c592a76421a2844b95a9ceaaef97f1d545abf: Status 404 returned error can't find the container with id b303d9907e09a871fa5a36f0194c592a76421a2844b95a9ceaaef97f1d545abf
Mar 08 03:11:37.486579 master-0 kubenswrapper[7387]: W0308 03:11:37.486551 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b0f0192_f2ab_4d6c_bf74_2b149bdaefe6.slice/crio-e677a54e6724884557ae20d247d9a84e80a29107af56ad730c6c9a95dbebf9a5 WatchSource:0}: Error finding container e677a54e6724884557ae20d247d9a84e80a29107af56ad730c6c9a95dbebf9a5: Status 404 returned error can't find the container with id e677a54e6724884557ae20d247d9a84e80a29107af56ad730c6c9a95dbebf9a5
Mar 08 03:11:37.660361 master-0 kubenswrapper[7387]: I0308 03:11:37.660317 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw"]
Mar 08 03:11:37.699417 master-0 kubenswrapper[7387]: I0308 03:11:37.697049 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n"]
Mar 08 03:11:37.706085 master-0 kubenswrapper[7387]: W0308 03:11:37.706032 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd68278f6_59d5_4bbf_b969_e47635ffd4cc.slice/crio-1b486915ec2d9eb73fc4331b88d96e65ac9fd451489c056db54081b15711177b WatchSource:0}: Error finding container 1b486915ec2d9eb73fc4331b88d96e65ac9fd451489c056db54081b15711177b: Status 404 returned error can't find the container with id 1b486915ec2d9eb73fc4331b88d96e65ac9fd451489c056db54081b15711177b
Mar 08 03:11:37.964007 master-0 kubenswrapper[7387]: I0308 03:11:37.963307 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-5bf974f84f-hzx44"
Mar 08 03:11:37.964007 master-0 kubenswrapper[7387]: I0308 03:11:37.964017 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-5bf974f84f-hzx44"
Mar 08 03:11:37.970742 master-0 kubenswrapper[7387]: I0308 03:11:37.970695 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 08 03:11:37.970930 master-0 kubenswrapper[7387]: I0308 03:11:37.970868 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="8b8c5365-e7a0-4f69-913f-2e12b142e4a5" containerName="installer" containerID="cri-o://2c219d2ffed7988b04169d2e3c20b8b683dd3d20eb4e97983e2ec6007ff4233d" gracePeriod=30
Mar 08 03:11:37.977517 master-0 kubenswrapper[7387]: I0308 03:11:37.977144 7387 patch_prober.go:28] interesting pod/apiserver-5bf974f84f-hzx44 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Mar 08 03:11:37.977517 master-0 kubenswrapper[7387]: [+]log ok
Mar 08 03:11:37.977517 master-0 kubenswrapper[7387]: [+]etcd ok
Mar 08 03:11:37.977517 master-0 kubenswrapper[7387]: [+]poststarthook/start-apiserver-admission-initializer ok
Mar 08 03:11:37.977517 master-0 kubenswrapper[7387]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 08 03:11:37.977517 master-0 kubenswrapper[7387]: [+]poststarthook/max-in-flight-filter ok
Mar 08 03:11:37.977517 master-0 kubenswrapper[7387]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 08 03:11:37.977517 master-0 kubenswrapper[7387]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Mar 08 03:11:37.977517 master-0 kubenswrapper[7387]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Mar 08 03:11:37.977517 master-0 kubenswrapper[7387]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Mar 08 03:11:37.977517 master-0 kubenswrapper[7387]: [+]poststarthook/project.openshift.io-projectcache ok
Mar 08 03:11:37.977517 master-0 kubenswrapper[7387]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Mar 08 03:11:37.977517 master-0 kubenswrapper[7387]: [+]poststarthook/openshift.io-startinformers ok
Mar 08 03:11:37.977517 master-0 kubenswrapper[7387]: [+]poststarthook/openshift.io-restmapperupdater ok
Mar 08 03:11:37.977517 master-0 kubenswrapper[7387]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 08 03:11:37.977517 master-0 kubenswrapper[7387]: livez check failed
Mar 08 03:11:37.977517 master-0 kubenswrapper[7387]: I0308 03:11:37.977179 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" podUID="f2057f75-159d-4416-a234-050f0fe1afc9" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:11:38.259827 master-0 kubenswrapper[7387]: I0308 03:11:38.259728 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" event={"ID":"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6","Type":"ContainerStarted","Data":"f3c48cd42f9b900a3418582add786503fcc3f612245b2515c1f6387a810d482a"}
Mar 08 03:11:38.259827 master-0 kubenswrapper[7387]: I0308 03:11:38.259788 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" event={"ID":"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6","Type":"ContainerStarted","Data":"5ffe2f08a61a9faac98a304d7e3f26296109a1c759116e58c683819c7d929612"}
Mar 08 03:11:38.263938 master-0 kubenswrapper[7387]: I0308 03:11:38.263887 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" event={"ID":"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6","Type":"ContainerStarted","Data":"e677a54e6724884557ae20d247d9a84e80a29107af56ad730c6c9a95dbebf9a5"}
Mar 08 03:11:38.265911 master-0 kubenswrapper[7387]: I0308 03:11:38.265873 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx" event={"ID":"5a92a557-d023-4531-b3a3-e559af0fe358","Type":"ContainerStarted","Data":"901d5d72687a570475c0c1ccb8e78c8e542036296238b7606d96a86beb5c35c7"}
Mar 08 03:11:38.267300 master-0 kubenswrapper[7387]: I0308 03:11:38.267277 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx" event={"ID":"ed56c17f-7e15-4776-80a6-3ef091307e89","Type":"ContainerStarted","Data":"c955986a722d7c797742e1c5d2eda34143fb5f9b3ba2a0f15453a1ce4e4cb127"}
Mar 08 03:11:38.268689 master-0 kubenswrapper[7387]: I0308 03:11:38.268654 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n" event={"ID":"d68278f6-59d5-4bbf-b969-e47635ffd4cc","Type":"ContainerStarted","Data":"1b486915ec2d9eb73fc4331b88d96e65ac9fd451489c056db54081b15711177b"}
Mar 08 03:11:38.270102 master-0 kubenswrapper[7387]: I0308 03:11:38.270055 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2l64n" event={"ID":"f6ee6202-11e5-4586-ae46-075da1ad7f1a","Type":"ContainerStarted","Data":"3cd41a65358471f5054db74b4750cf6ade61d95a5a85377f17ce5e88dcbed459"}
Mar 08 03:11:38.271379 master-0 kubenswrapper[7387]: I0308 03:11:38.271361 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl" event={"ID":"d5f84bd4-2803-41ff-a1d1-a549991fe895","Type":"ContainerStarted","Data":"b303d9907e09a871fa5a36f0194c592a76421a2844b95a9ceaaef97f1d545abf"}
Mar 08 03:11:38.477922 master-0 kubenswrapper[7387]: I0308 03:11:38.477098 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48cb3a00-5875-4d62-8afd-f964c9545c65-client-ca\") pod \"route-controller-manager-6ff96cfc69-gqmqm\" (UID: \"48cb3a00-5875-4d62-8afd-f964c9545c65\") " pod="openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm"
Mar 08 03:11:38.477922 master-0 kubenswrapper[7387]: E0308 03:11:38.477350 7387 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 08 03:11:38.477922 master-0 kubenswrapper[7387]: E0308 03:11:38.477398 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48cb3a00-5875-4d62-8afd-f964c9545c65-client-ca podName:48cb3a00-5875-4d62-8afd-f964c9545c65 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:42.477384851 +0000 UTC m=+38.871860532 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/48cb3a00-5875-4d62-8afd-f964c9545c65-client-ca") pod "route-controller-manager-6ff96cfc69-gqmqm" (UID: "48cb3a00-5875-4d62-8afd-f964c9545c65") : configmap "client-ca" not found
Mar 08 03:11:38.989133 master-0 kubenswrapper[7387]: I0308 03:11:38.986565 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-client-ca\") pod \"controller-manager-758ff9f665-bmgpk\" (UID: \"63debde5-3369-4cfb-9c82-95690671d24a\") " pod="openshift-controller-manager/controller-manager-758ff9f665-bmgpk"
Mar 08 03:11:38.989133 master-0 kubenswrapper[7387]: E0308 03:11:38.986699 7387 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 08 03:11:38.989133 master-0 kubenswrapper[7387]: E0308 03:11:38.986753 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-client-ca podName:63debde5-3369-4cfb-9c82-95690671d24a nodeName:}" failed. No retries permitted until 2026-03-08 03:11:46.98673928 +0000 UTC m=+43.381214961 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-client-ca") pod "controller-manager-758ff9f665-bmgpk" (UID: "63debde5-3369-4cfb-9c82-95690671d24a") : configmap "client-ca" not found
Mar 08 03:11:39.618516 master-0 kubenswrapper[7387]: I0308 03:11:39.618463 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-4lx8s"
Mar 08 03:11:40.572339 master-0 kubenswrapper[7387]: I0308 03:11:40.572292 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 08 03:11:40.573197 master-0 kubenswrapper[7387]: I0308 03:11:40.573169 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Mar 08 03:11:40.579913 master-0 kubenswrapper[7387]: I0308 03:11:40.579874 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 08 03:11:40.707691 master-0 kubenswrapper[7387]: I0308 03:11:40.707544 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 08 03:11:40.707927 master-0 kubenswrapper[7387]: I0308 03:11:40.707757 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75-var-lock\") pod \"installer-2-master-0\" (UID: \"d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 08 03:11:40.707927 master-0 kubenswrapper[7387]: I0308 03:11:40.707834 7387 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75-kube-api-access\") pod \"installer-2-master-0\" (UID: \"d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 08 03:11:40.811967 master-0 kubenswrapper[7387]: I0308 03:11:40.811782 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 08 03:11:40.811967 master-0 kubenswrapper[7387]: I0308 03:11:40.811866 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75-var-lock\") pod \"installer-2-master-0\" (UID: \"d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 08 03:11:40.812338 master-0 kubenswrapper[7387]: I0308 03:11:40.811990 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 08 03:11:40.812338 master-0 kubenswrapper[7387]: I0308 03:11:40.812117 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75-var-lock\") pod \"installer-2-master-0\" (UID: \"d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 08 03:11:40.812338 master-0 kubenswrapper[7387]: I0308 03:11:40.812163 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75-kube-api-access\") pod \"installer-2-master-0\" (UID: \"d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 08 03:11:40.833468 master-0 kubenswrapper[7387]: I0308 03:11:40.833341 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75-kube-api-access\") pod \"installer-2-master-0\" (UID: \"d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75\") " pod="openshift-kube-scheduler/installer-2-master-0"
Mar 08 03:11:40.911211 master-0 kubenswrapper[7387]: I0308 03:11:40.911061 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Mar 08 03:11:41.789880 master-0 kubenswrapper[7387]: I0308 03:11:41.787538 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 08 03:11:41.806041 master-0 kubenswrapper[7387]: W0308 03:11:41.805999 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd5b0bb96_9fcd_426d_abb7_aa3ec6bcbb75.slice/crio-ab218e481e6b65c60b8d01ae90ba379f9494fedc6779f71bcb8886d790d6b966 WatchSource:0}: Error finding container ab218e481e6b65c60b8d01ae90ba379f9494fedc6779f71bcb8886d790d6b966: Status 404 returned error can't find the container with id ab218e481e6b65c60b8d01ae90ba379f9494fedc6779f71bcb8886d790d6b966
Mar 08 03:11:42.299400 master-0 kubenswrapper[7387]: I0308 03:11:42.299351 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2l64n" event={"ID":"f6ee6202-11e5-4586-ae46-075da1ad7f1a","Type":"ContainerStarted","Data":"6d6396292c8936e8df87fb65043427182cee4053d1f425af348adb6d62a4e94c"}
Mar 08 03:11:42.299400 master-0 kubenswrapper[7387]: I0308 03:11:42.299405 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2l64n" event={"ID":"f6ee6202-11e5-4586-ae46-075da1ad7f1a","Type":"ContainerStarted","Data":"86887b025aa4238648ac7a93a17045dee635cad9e620fc88362f8dfc7f883747"}
Mar 08 03:11:42.302001 master-0 kubenswrapper[7387]: I0308 03:11:42.301963 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" event={"ID":"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6","Type":"ContainerStarted","Data":"207b42b97b0cc7b2a3b3fe717f857e83a1274408fc29faf61812a15be3fc5f86"}
Mar 08 03:11:42.302254 master-0 kubenswrapper[7387]: I0308 03:11:42.302216 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf"
Mar 08 03:11:42.309267 master-0 kubenswrapper[7387]: I0308 03:11:42.309116 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx" event={"ID":"ed56c17f-7e15-4776-80a6-3ef091307e89","Type":"ContainerStarted","Data":"102fd777f42f6eb70d9d1aae6252e9020f8f7fcf02aacf07c84de12107c6d1ca"}
Mar 08 03:11:42.318432 master-0 kubenswrapper[7387]: I0308 03:11:42.313427 7387 generic.go:334] "Generic (PLEG): container finished" podID="3a2a141d-a4c3-4b6c-a90b-d184f61a14dd" containerID="b02be813c757aa8825e328781683d790be0707b1273d725c9eedbb7404cb32df" exitCode=0
Mar 08 03:11:42.318432 master-0 kubenswrapper[7387]: I0308 03:11:42.313470 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" event={"ID":"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd","Type":"ContainerDied","Data":"b02be813c757aa8825e328781683d790be0707b1273d725c9eedbb7404cb32df"}
Mar 08 03:11:42.318432 master-0 kubenswrapper[7387]: I0308 03:11:42.316389 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl" event={"ID":"d5f84bd4-2803-41ff-a1d1-a549991fe895","Type":"ContainerStarted","Data":"4c27d8bf0fe82333d5a0263568559ac58eb59de0b0e67b1c1334b664b1330158"}
Mar 08 03:11:42.318432 master-0 kubenswrapper[7387]: I0308 03:11:42.316415 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl" event={"ID":"d5f84bd4-2803-41ff-a1d1-a549991fe895","Type":"ContainerStarted","Data":"d8908e02467ded566e9d23379f605a2e44df49bd48cf230c5b0b05ea8c4f6b21"}
Mar 08 03:11:42.321696 master-0 kubenswrapper[7387]: I0308 03:11:42.321014 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf"
Mar 08 03:11:42.322992 master-0 kubenswrapper[7387]: I0308 03:11:42.322912 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75","Type":"ContainerStarted","Data":"ffb6f8fa97406fdc4a2f646861c32438f691b60c7a72b4ca039b272eae00c682"}
Mar 08 03:11:42.323295 master-0 kubenswrapper[7387]: I0308 03:11:42.323235 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75","Type":"ContainerStarted","Data":"ab218e481e6b65c60b8d01ae90ba379f9494fedc6779f71bcb8886d790d6b966"}
Mar 08 03:11:42.439132 master-0 kubenswrapper[7387]: I0308 03:11:42.438980 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=2.438954463 podStartE2EDuration="2.438954463s" podCreationTimestamp="2026-03-08 03:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:11:42.438342377 +0000 UTC m=+38.832818078" watchObservedRunningTime="2026-03-08 03:11:42.438954463 +0000 UTC m=+38.833430144"
Mar 08 03:11:42.536224
master-0 kubenswrapper[7387]: I0308 03:11:42.536137 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48cb3a00-5875-4d62-8afd-f964c9545c65-client-ca\") pod \"route-controller-manager-6ff96cfc69-gqmqm\" (UID: \"48cb3a00-5875-4d62-8afd-f964c9545c65\") " pod="openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm"
Mar 08 03:11:42.536534 master-0 kubenswrapper[7387]: E0308 03:11:42.536306 7387 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 08 03:11:42.536534 master-0 kubenswrapper[7387]: E0308 03:11:42.536403 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48cb3a00-5875-4d62-8afd-f964c9545c65-client-ca podName:48cb3a00-5875-4d62-8afd-f964c9545c65 nodeName:}" failed. No retries permitted until 2026-03-08 03:11:50.536379458 +0000 UTC m=+46.930855209 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/48cb3a00-5875-4d62-8afd-f964c9545c65-client-ca") pod "route-controller-manager-6ff96cfc69-gqmqm" (UID: "48cb3a00-5875-4d62-8afd-f964c9545c65") : configmap "client-ca" not found
Mar 08 03:11:42.970117 master-0 kubenswrapper[7387]: I0308 03:11:42.970057 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-5bf974f84f-hzx44"
Mar 08 03:11:42.974058 master-0 kubenswrapper[7387]: I0308 03:11:42.974026 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-5bf974f84f-hzx44"
Mar 08 03:11:43.334390 master-0 kubenswrapper[7387]: I0308 03:11:43.334299 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" event={"ID":"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd","Type":"ContainerStarted","Data":"c53242ecbbb784242fbec696e38427a49110ad19dfc869e9e7a62d362410a1fb"}
Mar 08 03:11:43.350313 master-0 kubenswrapper[7387]: I0308 03:11:43.350253 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" podStartSLOduration=5.926701774 podStartE2EDuration="11.350233284s" podCreationTimestamp="2026-03-08 03:11:32 +0000 UTC" firstStartedPulling="2026-03-08 03:11:35.922957873 +0000 UTC m=+32.317433554" lastFinishedPulling="2026-03-08 03:11:41.346489393 +0000 UTC m=+37.740965064" observedRunningTime="2026-03-08 03:11:43.348501578 +0000 UTC m=+39.742977259" watchObservedRunningTime="2026-03-08 03:11:43.350233284 +0000 UTC m=+39.744708955"
Mar 08 03:11:43.715395 master-0 kubenswrapper[7387]: I0308 03:11:43.715192 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld"]
Mar 08 03:11:43.715739 master-0 kubenswrapper[7387]: I0308 03:11:43.715413 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" podUID="1f7c9726-057b-4c5c-8a03-9bc407dedb9b" containerName="cluster-version-operator" containerID="cri-o://1c9245f15bcf54e25cb203fc917b2d4f93cb10e986b15b06f5c877aef0dd40b8" gracePeriod=130
Mar 08 03:11:43.735649 master-0 kubenswrapper[7387]: I0308 03:11:43.731653 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl"
Mar 08 03:11:43.735649 master-0 kubenswrapper[7387]: I0308 03:11:43.731734 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl"
Mar 08 03:11:43.759146 master-0 kubenswrapper[7387]: I0308 03:11:43.758168 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl"
Mar 08 03:11:44.346601 master-0 kubenswrapper[7387]: I0308 03:11:44.346429 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl"
Mar 08 03:11:45.292962 master-0 kubenswrapper[7387]: I0308 03:11:45.291890 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 08 03:11:45.292962 master-0 kubenswrapper[7387]: I0308 03:11:45.292491 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 08 03:11:45.296381 master-0 kubenswrapper[7387]: I0308 03:11:45.296335 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 08 03:11:45.311137 master-0 kubenswrapper[7387]: I0308 03:11:45.311090 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 08 03:11:45.387820 master-0 kubenswrapper[7387]: I0308 03:11:45.387773 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0a8d4b89-fd81-4418-9f72-c8447fad86ad-var-lock\") pod \"installer-1-master-0\" (UID: \"0a8d4b89-fd81-4418-9f72-c8447fad86ad\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 08 03:11:45.387820 master-0 kubenswrapper[7387]: I0308 03:11:45.387816 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a8d4b89-fd81-4418-9f72-c8447fad86ad-kube-api-access\") pod \"installer-1-master-0\" (UID: \"0a8d4b89-fd81-4418-9f72-c8447fad86ad\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 08 03:11:45.388442 master-0 kubenswrapper[7387]: I0308 03:11:45.387864 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a8d4b89-fd81-4418-9f72-c8447fad86ad-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"0a8d4b89-fd81-4418-9f72-c8447fad86ad\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 08 03:11:45.489621 master-0 kubenswrapper[7387]: I0308 03:11:45.489485 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName:
\"kubernetes.io/host-path/0a8d4b89-fd81-4418-9f72-c8447fad86ad-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"0a8d4b89-fd81-4418-9f72-c8447fad86ad\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 08 03:11:45.489794 master-0 kubenswrapper[7387]: I0308 03:11:45.489651 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0a8d4b89-fd81-4418-9f72-c8447fad86ad-var-lock\") pod \"installer-1-master-0\" (UID: \"0a8d4b89-fd81-4418-9f72-c8447fad86ad\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 08 03:11:45.489794 master-0 kubenswrapper[7387]: I0308 03:11:45.489706 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a8d4b89-fd81-4418-9f72-c8447fad86ad-kube-api-access\") pod \"installer-1-master-0\" (UID: \"0a8d4b89-fd81-4418-9f72-c8447fad86ad\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 08 03:11:45.489794 master-0 kubenswrapper[7387]: I0308 03:11:45.489719 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a8d4b89-fd81-4418-9f72-c8447fad86ad-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"0a8d4b89-fd81-4418-9f72-c8447fad86ad\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 08 03:11:45.490165 master-0 kubenswrapper[7387]: I0308 03:11:45.490054 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0a8d4b89-fd81-4418-9f72-c8447fad86ad-var-lock\") pod \"installer-1-master-0\" (UID: \"0a8d4b89-fd81-4418-9f72-c8447fad86ad\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 08 03:11:45.510592 master-0 kubenswrapper[7387]: I0308 03:11:45.510535 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a8d4b89-fd81-4418-9f72-c8447fad86ad-kube-api-access\") pod \"installer-1-master-0\" (UID: \"0a8d4b89-fd81-4418-9f72-c8447fad86ad\") " pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 08 03:11:45.625286 master-0 kubenswrapper[7387]: I0308 03:11:45.625221 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Mar 08 03:11:45.780297 master-0 kubenswrapper[7387]: I0308 03:11:45.780196 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"
Mar 08 03:11:45.796051 master-0 kubenswrapper[7387]: I0308 03:11:45.795972 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2"
Mar 08 03:11:47.016945 master-0 kubenswrapper[7387]: I0308 03:11:47.014852 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-client-ca\") pod \"controller-manager-758ff9f665-bmgpk\" (UID: \"63debde5-3369-4cfb-9c82-95690671d24a\") " pod="openshift-controller-manager/controller-manager-758ff9f665-bmgpk"
Mar 08 03:11:47.016945 master-0 kubenswrapper[7387]: E0308 03:11:47.014986 7387 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 08 03:11:47.016945 master-0 kubenswrapper[7387]: E0308 03:11:47.015045 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-client-ca podName:63debde5-3369-4cfb-9c82-95690671d24a nodeName:}" failed. No retries permitted until 2026-03-08 03:12:03.015031534 +0000 UTC m=+59.409507215 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-client-ca") pod "controller-manager-758ff9f665-bmgpk" (UID: "63debde5-3369-4cfb-9c82-95690671d24a") : configmap "client-ca" not found
Mar 08 03:11:47.321870 master-0 kubenswrapper[7387]: I0308 03:11:47.321697 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld"
Mar 08 03:11:47.356684 master-0 kubenswrapper[7387]: I0308 03:11:47.356640 7387 generic.go:334] "Generic (PLEG): container finished" podID="1f7c9726-057b-4c5c-8a03-9bc407dedb9b" containerID="1c9245f15bcf54e25cb203fc917b2d4f93cb10e986b15b06f5c877aef0dd40b8" exitCode=0
Mar 08 03:11:47.356754 master-0 kubenswrapper[7387]: I0308 03:11:47.356687 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" event={"ID":"1f7c9726-057b-4c5c-8a03-9bc407dedb9b","Type":"ContainerDied","Data":"1c9245f15bcf54e25cb203fc917b2d4f93cb10e986b15b06f5c877aef0dd40b8"}
Mar 08 03:11:47.356754 master-0 kubenswrapper[7387]: I0308 03:11:47.356714 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld" event={"ID":"1f7c9726-057b-4c5c-8a03-9bc407dedb9b","Type":"ContainerDied","Data":"dfcfcec74b59c8edece18562777369d3232bedeeb026d96b158dd486250793d3"}
Mar 08 03:11:47.356754 master-0 kubenswrapper[7387]: I0308 03:11:47.356743 7387 scope.go:117] "RemoveContainer" containerID="1c9245f15bcf54e25cb203fc917b2d4f93cb10e986b15b06f5c877aef0dd40b8"
Mar 08 03:11:47.356845 master-0 kubenswrapper[7387]: I0308 03:11:47.356833 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld"
Mar 08 03:11:47.373198 master-0 kubenswrapper[7387]: I0308 03:11:47.373164 7387 scope.go:117] "RemoveContainer" containerID="1c9245f15bcf54e25cb203fc917b2d4f93cb10e986b15b06f5c877aef0dd40b8"
Mar 08 03:11:47.374639 master-0 kubenswrapper[7387]: E0308 03:11:47.374602 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c9245f15bcf54e25cb203fc917b2d4f93cb10e986b15b06f5c877aef0dd40b8\": container with ID starting with 1c9245f15bcf54e25cb203fc917b2d4f93cb10e986b15b06f5c877aef0dd40b8 not found: ID does not exist" containerID="1c9245f15bcf54e25cb203fc917b2d4f93cb10e986b15b06f5c877aef0dd40b8"
Mar 08 03:11:47.374779 master-0 kubenswrapper[7387]: I0308 03:11:47.374644 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c9245f15bcf54e25cb203fc917b2d4f93cb10e986b15b06f5c877aef0dd40b8"} err="failed to get container status \"1c9245f15bcf54e25cb203fc917b2d4f93cb10e986b15b06f5c877aef0dd40b8\": rpc error: code = NotFound desc = could not find container \"1c9245f15bcf54e25cb203fc917b2d4f93cb10e986b15b06f5c877aef0dd40b8\": container with ID starting with 1c9245f15bcf54e25cb203fc917b2d4f93cb10e986b15b06f5c877aef0dd40b8 not found: ID does not exist"
Mar 08 03:11:47.419938 master-0 kubenswrapper[7387]: I0308 03:11:47.419885 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-service-ca\") pod \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") "
Mar 08 03:11:47.420019 master-0 kubenswrapper[7387]: I0308 03:11:47.419962 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-etc-ssl-certs\")
pod \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") "
Mar 08 03:11:47.420019 master-0 kubenswrapper[7387]: I0308 03:11:47.419991 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-kube-api-access\") pod \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") "
Mar 08 03:11:47.420085 master-0 kubenswrapper[7387]: I0308 03:11:47.420068 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert\") pod \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") "
Mar 08 03:11:47.420115 master-0 kubenswrapper[7387]: I0308 03:11:47.420083 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "1f7c9726-057b-4c5c-8a03-9bc407dedb9b" (UID: "1f7c9726-057b-4c5c-8a03-9bc407dedb9b"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:11:47.420147 master-0 kubenswrapper[7387]: I0308 03:11:47.420113 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-etc-cvo-updatepayloads\") pod \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\" (UID: \"1f7c9726-057b-4c5c-8a03-9bc407dedb9b\") "
Mar 08 03:11:47.420353 master-0 kubenswrapper[7387]: I0308 03:11:47.420331 7387 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-etc-ssl-certs\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:47.420390 master-0 kubenswrapper[7387]: I0308 03:11:47.420376 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "1f7c9726-057b-4c5c-8a03-9bc407dedb9b" (UID: "1f7c9726-057b-4c5c-8a03-9bc407dedb9b"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:11:47.420669 master-0 kubenswrapper[7387]: I0308 03:11:47.420605 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-service-ca" (OuterVolumeSpecName: "service-ca") pod "1f7c9726-057b-4c5c-8a03-9bc407dedb9b" (UID: "1f7c9726-057b-4c5c-8a03-9bc407dedb9b"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 03:11:47.429063 master-0 kubenswrapper[7387]: I0308 03:11:47.429006 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1f7c9726-057b-4c5c-8a03-9bc407dedb9b" (UID: "1f7c9726-057b-4c5c-8a03-9bc407dedb9b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 03:11:47.433339 master-0 kubenswrapper[7387]: I0308 03:11:47.431137 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1f7c9726-057b-4c5c-8a03-9bc407dedb9b" (UID: "1f7c9726-057b-4c5c-8a03-9bc407dedb9b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:11:47.521548 master-0 kubenswrapper[7387]: I0308 03:11:47.521487 7387 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:47.521548 master-0 kubenswrapper[7387]: I0308 03:11:47.521543 7387 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:47.521774 master-0 kubenswrapper[7387]: I0308 03:11:47.521561 7387 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:47.521774 master-0 kubenswrapper[7387]: I0308 03:11:47.521570 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1f7c9726-057b-4c5c-8a03-9bc407dedb9b-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:47.662016 master-0 kubenswrapper[7387]: I0308 03:11:47.661031 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Mar 08 03:11:47.693105 master-0 kubenswrapper[7387]: I0308 03:11:47.691891 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld"]
Mar 08 03:11:47.697969 master-0 kubenswrapper[7387]: I0308 03:11:47.696247 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-rs4ld"]
Mar 08 03:11:47.733764 master-0 kubenswrapper[7387]: I0308 03:11:47.733664 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86"]
Mar 08 03:11:47.734912 master-0 kubenswrapper[7387]: E0308 03:11:47.734866 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7c9726-057b-4c5c-8a03-9bc407dedb9b" containerName="cluster-version-operator"
Mar 08 03:11:47.735246 master-0 kubenswrapper[7387]: I0308 03:11:47.735041 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7c9726-057b-4c5c-8a03-9bc407dedb9b" containerName="cluster-version-operator"
Mar 08 03:11:47.736073 master-0 kubenswrapper[7387]: I0308 03:11:47.736038 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f7c9726-057b-4c5c-8a03-9bc407dedb9b" containerName="cluster-version-operator"
Mar 08 03:11:47.743851 master-0 kubenswrapper[7387]: I0308 03:11:47.743804 7387 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86"
Mar 08 03:11:47.748052 master-0 kubenswrapper[7387]: I0308 03:11:47.747646 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 08 03:11:47.748052 master-0 kubenswrapper[7387]: I0308 03:11:47.747853 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 08 03:11:47.748052 master-0 kubenswrapper[7387]: I0308 03:11:47.747837 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 08 03:11:47.789927 master-0 kubenswrapper[7387]: I0308 03:11:47.787197 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f7c9726-057b-4c5c-8a03-9bc407dedb9b" path="/var/lib/kubelet/pods/1f7c9726-057b-4c5c-8a03-9bc407dedb9b/volumes"
Mar 08 03:11:47.926149 master-0 kubenswrapper[7387]: I0308 03:11:47.925962 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d2a53f3b-7e22-47eb-9f28-da3441b3662f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86"
Mar 08 03:11:47.926149 master-0 kubenswrapper[7387]: I0308 03:11:47.926033 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d2a53f3b-7e22-47eb-9f28-da3441b3662f-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86"
Mar 08 03:11:47.926149 master-0 kubenswrapper[7387]: I0308 03:11:47.926051 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2a53f3b-7e22-47eb-9f28-da3441b3662f-serving-cert\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86"
Mar 08 03:11:47.926371 master-0 kubenswrapper[7387]: I0308 03:11:47.926204 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d2a53f3b-7e22-47eb-9f28-da3441b3662f-service-ca\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86"
Mar 08 03:11:47.926371 master-0 kubenswrapper[7387]: I0308 03:11:47.926329 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2a53f3b-7e22-47eb-9f28-da3441b3662f-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86"
Mar 08 03:11:48.027034 master-0 kubenswrapper[7387]: I0308 03:11:48.026961 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2a53f3b-7e22-47eb-9f28-da3441b3662f-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86"
Mar 08 03:11:48.027779 master-0 kubenswrapper[7387]: I0308 03:11:48.027295 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d2a53f3b-7e22-47eb-9f28-da3441b3662f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86"
Mar 08 03:11:48.027779 master-0 kubenswrapper[7387]: I0308 03:11:48.027387 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d2a53f3b-7e22-47eb-9f28-da3441b3662f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86"
Mar 08 03:11:48.027779 master-0 kubenswrapper[7387]: I0308 03:11:48.027398 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d2a53f3b-7e22-47eb-9f28-da3441b3662f-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86"
Mar 08 03:11:48.027779 master-0 kubenswrapper[7387]: I0308 03:11:48.027447 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2a53f3b-7e22-47eb-9f28-da3441b3662f-serving-cert\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86"
Mar 08 03:11:48.027779 master-0 kubenswrapper[7387]: I0308 03:11:48.027461 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d2a53f3b-7e22-47eb-9f28-da3441b3662f-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") "
pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86" Mar 08 03:11:48.027779 master-0 kubenswrapper[7387]: I0308 03:11:48.027493 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d2a53f3b-7e22-47eb-9f28-da3441b3662f-service-ca\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86" Mar 08 03:11:48.030039 master-0 kubenswrapper[7387]: I0308 03:11:48.028660 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d2a53f3b-7e22-47eb-9f28-da3441b3662f-service-ca\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86" Mar 08 03:11:48.037875 master-0 kubenswrapper[7387]: I0308 03:11:48.037820 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2a53f3b-7e22-47eb-9f28-da3441b3662f-serving-cert\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86" Mar 08 03:11:48.046855 master-0 kubenswrapper[7387]: I0308 03:11:48.046759 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-p6kjc" Mar 08 03:11:48.047227 master-0 kubenswrapper[7387]: I0308 03:11:48.047173 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2a53f3b-7e22-47eb-9f28-da3441b3662f-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86" Mar 08 
03:11:48.068140 master-0 kubenswrapper[7387]: I0308 03:11:48.068059 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86" Mar 08 03:11:48.231950 master-0 kubenswrapper[7387]: I0308 03:11:48.231022 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-758ff9f665-bmgpk"] Mar 08 03:11:48.231950 master-0 kubenswrapper[7387]: E0308 03:11:48.231342 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-758ff9f665-bmgpk" podUID="63debde5-3369-4cfb-9c82-95690671d24a" Mar 08 03:11:48.254939 master-0 kubenswrapper[7387]: I0308 03:11:48.252934 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm"] Mar 08 03:11:48.254939 master-0 kubenswrapper[7387]: E0308 03:11:48.253243 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm" podUID="48cb3a00-5875-4d62-8afd-f964c9545c65" Mar 08 03:11:48.362295 master-0 kubenswrapper[7387]: I0308 03:11:48.362243 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86" event={"ID":"d2a53f3b-7e22-47eb-9f28-da3441b3662f","Type":"ContainerStarted","Data":"50e75d2b6ff206804802c9331065b3194c6e165af0a4d329ce7b16d5dd4ec36b"} Mar 08 03:11:48.362295 master-0 kubenswrapper[7387]: I0308 03:11:48.362293 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86" 
event={"ID":"d2a53f3b-7e22-47eb-9f28-da3441b3662f","Type":"ContainerStarted","Data":"63df01fd9ed048d9f095f5eeea9d96eeca7e15c41770d9375fbe4be8cc706183"} Mar 08 03:11:48.364483 master-0 kubenswrapper[7387]: I0308 03:11:48.364445 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" event={"ID":"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6","Type":"ContainerStarted","Data":"61085a1c0f60df971fea9a09a95423c547ccb46d0bf74149a0614fd843a50e98"} Mar 08 03:11:48.365030 master-0 kubenswrapper[7387]: I0308 03:11:48.364994 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" Mar 08 03:11:48.368474 master-0 kubenswrapper[7387]: I0308 03:11:48.368444 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n" event={"ID":"d68278f6-59d5-4bbf-b969-e47635ffd4cc","Type":"ContainerStarted","Data":"4b9ff8823a34c9354082d2f43b74069c86dc37cef5c844e998a60db85f9b57bd"} Mar 08 03:11:48.369389 master-0 kubenswrapper[7387]: I0308 03:11:48.369228 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n" Mar 08 03:11:48.370475 master-0 kubenswrapper[7387]: I0308 03:11:48.370395 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"0a8d4b89-fd81-4418-9f72-c8447fad86ad","Type":"ContainerStarted","Data":"0cb275b613648ba82dd895945a8f72c136f919a1708eb582688a065e13a9ce66"} Mar 08 03:11:48.370475 master-0 kubenswrapper[7387]: I0308 03:11:48.370424 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"0a8d4b89-fd81-4418-9f72-c8447fad86ad","Type":"ContainerStarted","Data":"5e69232ee32af2930950dbc1ce8dd12459189b96461d880072fd507e99455d62"} 
Mar 08 03:11:48.371698 master-0 kubenswrapper[7387]: I0308 03:11:48.371669 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 08 03:11:48.372084 master-0 kubenswrapper[7387]: I0308 03:11:48.371840 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75" containerName="installer" containerID="cri-o://ffb6f8fa97406fdc4a2f646861c32438f691b60c7a72b4ca039b272eae00c682" gracePeriod=30
Mar 08 03:11:48.373453 master-0 kubenswrapper[7387]: I0308 03:11:48.373425 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm"
Mar 08 03:11:48.374831 master-0 kubenswrapper[7387]: I0308 03:11:48.374545 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx" event={"ID":"5a92a557-d023-4531-b3a3-e559af0fe358","Type":"ContainerStarted","Data":"49ce1406037663d0653afea9d092542f168bf15a292da6650736eae9a204cfb6"}
Mar 08 03:11:48.374831 master-0 kubenswrapper[7387]: I0308 03:11:48.374573 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx"
Mar 08 03:11:48.374831 master-0 kubenswrapper[7387]: I0308 03:11:48.374603 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-758ff9f665-bmgpk"
Mar 08 03:11:48.377850 master-0 kubenswrapper[7387]: I0308 03:11:48.377816 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n"
Mar 08 03:11:48.378460 master-0 kubenswrapper[7387]: I0308 03:11:48.378437 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx"
Mar 08 03:11:48.382319 master-0 kubenswrapper[7387]: I0308 03:11:48.382287 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm"
Mar 08 03:11:48.389994 master-0 kubenswrapper[7387]: I0308 03:11:48.389934 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86" podStartSLOduration=1.389904238 podStartE2EDuration="1.389904238s" podCreationTimestamp="2026-03-08 03:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:11:48.384882616 +0000 UTC m=+44.779358287" watchObservedRunningTime="2026-03-08 03:11:48.389904238 +0000 UTC m=+44.784379919"
Mar 08 03:11:48.442716 master-0 kubenswrapper[7387]: I0308 03:11:48.442112 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-758ff9f665-bmgpk"
Mar 08 03:11:48.467664 master-0 kubenswrapper[7387]: I0308 03:11:48.467494 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=3.46747038 podStartE2EDuration="3.46747038s" podCreationTimestamp="2026-03-08 03:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:11:48.466035512 +0000 UTC m=+44.860511193" watchObservedRunningTime="2026-03-08 03:11:48.46747038 +0000 UTC m=+44.861946061"
Mar 08 03:11:48.534227 master-0 kubenswrapper[7387]: I0308 03:11:48.534163 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48cb3a00-5875-4d62-8afd-f964c9545c65-config\") pod \"48cb3a00-5875-4d62-8afd-f964c9545c65\" (UID: \"48cb3a00-5875-4d62-8afd-f964c9545c65\") "
Mar 08 03:11:48.534415 master-0 kubenswrapper[7387]: I0308 03:11:48.534248 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48cb3a00-5875-4d62-8afd-f964c9545c65-serving-cert\") pod \"48cb3a00-5875-4d62-8afd-f964c9545c65\" (UID: \"48cb3a00-5875-4d62-8afd-f964c9545c65\") "
Mar 08 03:11:48.534415 master-0 kubenswrapper[7387]: I0308 03:11:48.534292 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pdnh\" (UniqueName: \"kubernetes.io/projected/48cb3a00-5875-4d62-8afd-f964c9545c65-kube-api-access-7pdnh\") pod \"48cb3a00-5875-4d62-8afd-f964c9545c65\" (UID: \"48cb3a00-5875-4d62-8afd-f964c9545c65\") "
Mar 08 03:11:48.534415 master-0 kubenswrapper[7387]: I0308 03:11:48.534349 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63debde5-3369-4cfb-9c82-95690671d24a-serving-cert\") pod \"63debde5-3369-4cfb-9c82-95690671d24a\" (UID: \"63debde5-3369-4cfb-9c82-95690671d24a\") "
Mar 08 03:11:48.534415 master-0 kubenswrapper[7387]: I0308 03:11:48.534407 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-proxy-ca-bundles\") pod \"63debde5-3369-4cfb-9c82-95690671d24a\" (UID: \"63debde5-3369-4cfb-9c82-95690671d24a\") "
Mar 08 03:11:48.534530 master-0 kubenswrapper[7387]: I0308 03:11:48.534471 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-config\") pod \"63debde5-3369-4cfb-9c82-95690671d24a\" (UID: \"63debde5-3369-4cfb-9c82-95690671d24a\") "
Mar 08 03:11:48.534530 master-0 kubenswrapper[7387]: I0308 03:11:48.534517 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spknp\" (UniqueName: \"kubernetes.io/projected/63debde5-3369-4cfb-9c82-95690671d24a-kube-api-access-spknp\") pod \"63debde5-3369-4cfb-9c82-95690671d24a\" (UID: \"63debde5-3369-4cfb-9c82-95690671d24a\") "
Mar 08 03:11:48.535014 master-0 kubenswrapper[7387]: I0308 03:11:48.534826 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "63debde5-3369-4cfb-9c82-95690671d24a" (UID: "63debde5-3369-4cfb-9c82-95690671d24a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 03:11:48.535072 master-0 kubenswrapper[7387]: I0308 03:11:48.535023 7387 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:48.535072 master-0 kubenswrapper[7387]: I0308 03:11:48.535036 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48cb3a00-5875-4d62-8afd-f964c9545c65-config" (OuterVolumeSpecName: "config") pod "48cb3a00-5875-4d62-8afd-f964c9545c65" (UID: "48cb3a00-5875-4d62-8afd-f964c9545c65"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 03:11:48.536228 master-0 kubenswrapper[7387]: I0308 03:11:48.536127 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-config" (OuterVolumeSpecName: "config") pod "63debde5-3369-4cfb-9c82-95690671d24a" (UID: "63debde5-3369-4cfb-9c82-95690671d24a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 03:11:48.543190 master-0 kubenswrapper[7387]: I0308 03:11:48.543124 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63debde5-3369-4cfb-9c82-95690671d24a-kube-api-access-spknp" (OuterVolumeSpecName: "kube-api-access-spknp") pod "63debde5-3369-4cfb-9c82-95690671d24a" (UID: "63debde5-3369-4cfb-9c82-95690671d24a"). InnerVolumeSpecName "kube-api-access-spknp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:11:48.548075 master-0 kubenswrapper[7387]: I0308 03:11:48.547531 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48cb3a00-5875-4d62-8afd-f964c9545c65-kube-api-access-7pdnh" (OuterVolumeSpecName: "kube-api-access-7pdnh") pod "48cb3a00-5875-4d62-8afd-f964c9545c65" (UID: "48cb3a00-5875-4d62-8afd-f964c9545c65"). InnerVolumeSpecName "kube-api-access-7pdnh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:11:48.550776 master-0 kubenswrapper[7387]: I0308 03:11:48.550573 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48cb3a00-5875-4d62-8afd-f964c9545c65-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "48cb3a00-5875-4d62-8afd-f964c9545c65" (UID: "48cb3a00-5875-4d62-8afd-f964c9545c65"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 03:11:48.563454 master-0 kubenswrapper[7387]: I0308 03:11:48.563394 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63debde5-3369-4cfb-9c82-95690671d24a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "63debde5-3369-4cfb-9c82-95690671d24a" (UID: "63debde5-3369-4cfb-9c82-95690671d24a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 03:11:48.577740 master-0 kubenswrapper[7387]: I0308 03:11:48.576998 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bv2v9"]
Mar 08 03:11:48.577740 master-0 kubenswrapper[7387]: I0308 03:11:48.577668 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bv2v9"
Mar 08 03:11:48.587874 master-0 kubenswrapper[7387]: I0308 03:11:48.587800 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bv2v9"]
Mar 08 03:11:48.635723 master-0 kubenswrapper[7387]: I0308 03:11:48.635676 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spknp\" (UniqueName: \"kubernetes.io/projected/63debde5-3369-4cfb-9c82-95690671d24a-kube-api-access-spknp\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:48.635723 master-0 kubenswrapper[7387]: I0308 03:11:48.635711 7387 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48cb3a00-5875-4d62-8afd-f964c9545c65-config\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:48.635723 master-0 kubenswrapper[7387]: I0308 03:11:48.635725 7387 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48cb3a00-5875-4d62-8afd-f964c9545c65-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:48.635974 master-0 kubenswrapper[7387]: I0308 03:11:48.635738 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pdnh\" (UniqueName: \"kubernetes.io/projected/48cb3a00-5875-4d62-8afd-f964c9545c65-kube-api-access-7pdnh\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:48.635974 master-0 kubenswrapper[7387]: I0308 03:11:48.635751 7387 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63debde5-3369-4cfb-9c82-95690671d24a-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:48.635974 master-0 kubenswrapper[7387]: I0308 03:11:48.635763 7387 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-config\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:48.738358 master-0 kubenswrapper[7387]: I0308 03:11:48.738318 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75/installer/0.log"
Mar 08 03:11:48.738509 master-0 kubenswrapper[7387]: I0308 03:11:48.738379 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Mar 08 03:11:48.739121 master-0 kubenswrapper[7387]: I0308 03:11:48.739077 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10895809-a444-42ec-a41f-111e17f6beb3-catalog-content\") pod \"community-operators-bv2v9\" (UID: \"10895809-a444-42ec-a41f-111e17f6beb3\") " pod="openshift-marketplace/community-operators-bv2v9"
Mar 08 03:11:48.739228 master-0 kubenswrapper[7387]: I0308 03:11:48.739143 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8889r\" (UniqueName: \"kubernetes.io/projected/10895809-a444-42ec-a41f-111e17f6beb3-kube-api-access-8889r\") pod \"community-operators-bv2v9\" (UID: \"10895809-a444-42ec-a41f-111e17f6beb3\") " pod="openshift-marketplace/community-operators-bv2v9"
Mar 08 03:11:48.739228 master-0 kubenswrapper[7387]: I0308 03:11:48.739169 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10895809-a444-42ec-a41f-111e17f6beb3-utilities\") pod \"community-operators-bv2v9\" (UID: \"10895809-a444-42ec-a41f-111e17f6beb3\") " pod="openshift-marketplace/community-operators-bv2v9"
Mar 08 03:11:48.772200 master-0 kubenswrapper[7387]: I0308 03:11:48.772151 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l2dj4"]
Mar 08 03:11:48.772437 master-0 kubenswrapper[7387]: E0308 03:11:48.772326 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75" containerName="installer"
Mar 08 03:11:48.772437 master-0 kubenswrapper[7387]: I0308 03:11:48.772338 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75" containerName="installer"
Mar 08 03:11:48.772437 master-0 kubenswrapper[7387]: I0308 03:11:48.772403 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75" containerName="installer"
Mar 08 03:11:48.773039 master-0 kubenswrapper[7387]: I0308 03:11:48.773016 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l2dj4"
Mar 08 03:11:48.784968 master-0 kubenswrapper[7387]: I0308 03:11:48.784929 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l2dj4"]
Mar 08 03:11:48.841475 master-0 kubenswrapper[7387]: I0308 03:11:48.841422 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75-kubelet-dir\") pod \"d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75\" (UID: \"d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75\") "
Mar 08 03:11:48.841475 master-0 kubenswrapper[7387]: I0308 03:11:48.841470 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75-var-lock\") pod \"d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75\" (UID: \"d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75\") "
Mar 08 03:11:48.841776 master-0 kubenswrapper[7387]: I0308 03:11:48.841568 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75-kube-api-access\") pod \"d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75\" (UID: \"d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75\") "
Mar 08 03:11:48.841776 master-0 kubenswrapper[7387]: I0308 03:11:48.841748 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10895809-a444-42ec-a41f-111e17f6beb3-catalog-content\") pod \"community-operators-bv2v9\" (UID: \"10895809-a444-42ec-a41f-111e17f6beb3\") " pod="openshift-marketplace/community-operators-bv2v9"
Mar 08 03:11:48.841862 master-0 kubenswrapper[7387]: I0308 03:11:48.841780 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8889r\" (UniqueName: \"kubernetes.io/projected/10895809-a444-42ec-a41f-111e17f6beb3-kube-api-access-8889r\") pod \"community-operators-bv2v9\" (UID: \"10895809-a444-42ec-a41f-111e17f6beb3\") " pod="openshift-marketplace/community-operators-bv2v9"
Mar 08 03:11:48.841862 master-0 kubenswrapper[7387]: I0308 03:11:48.841796 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10895809-a444-42ec-a41f-111e17f6beb3-utilities\") pod \"community-operators-bv2v9\" (UID: \"10895809-a444-42ec-a41f-111e17f6beb3\") " pod="openshift-marketplace/community-operators-bv2v9"
Mar 08 03:11:48.842263 master-0 kubenswrapper[7387]: I0308 03:11:48.842232 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10895809-a444-42ec-a41f-111e17f6beb3-utilities\") pod \"community-operators-bv2v9\" (UID: \"10895809-a444-42ec-a41f-111e17f6beb3\") " pod="openshift-marketplace/community-operators-bv2v9"
Mar 08 03:11:48.842336 master-0 kubenswrapper[7387]: I0308 03:11:48.842309 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10895809-a444-42ec-a41f-111e17f6beb3-catalog-content\") pod \"community-operators-bv2v9\" (UID: \"10895809-a444-42ec-a41f-111e17f6beb3\") " pod="openshift-marketplace/community-operators-bv2v9"
Mar 08 03:11:48.842336 master-0 kubenswrapper[7387]: I0308 03:11:48.841532 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75" (UID: "d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:11:48.842419 master-0 kubenswrapper[7387]: I0308 03:11:48.841589 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75-var-lock" (OuterVolumeSpecName: "var-lock") pod "d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75" (UID: "d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:11:48.844263 master-0 kubenswrapper[7387]: I0308 03:11:48.844189 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75" (UID: "d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:11:48.875401 master-0 kubenswrapper[7387]: I0308 03:11:48.875339 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8889r\" (UniqueName: \"kubernetes.io/projected/10895809-a444-42ec-a41f-111e17f6beb3-kube-api-access-8889r\") pod \"community-operators-bv2v9\" (UID: \"10895809-a444-42ec-a41f-111e17f6beb3\") " pod="openshift-marketplace/community-operators-bv2v9"
Mar 08 03:11:48.899185 master-0 kubenswrapper[7387]: I0308 03:11:48.899104 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bv2v9"
Mar 08 03:11:48.943200 master-0 kubenswrapper[7387]: I0308 03:11:48.943121 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7afe61b3-1460-48ed-9369-4d9893d2f4f4-catalog-content\") pod \"certified-operators-l2dj4\" (UID: \"7afe61b3-1460-48ed-9369-4d9893d2f4f4\") " pod="openshift-marketplace/certified-operators-l2dj4"
Mar 08 03:11:48.943419 master-0 kubenswrapper[7387]: I0308 03:11:48.943267 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7afe61b3-1460-48ed-9369-4d9893d2f4f4-utilities\") pod \"certified-operators-l2dj4\" (UID: \"7afe61b3-1460-48ed-9369-4d9893d2f4f4\") " pod="openshift-marketplace/certified-operators-l2dj4"
Mar 08 03:11:48.943419 master-0 kubenswrapper[7387]: I0308 03:11:48.943325 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whz5v\" (UniqueName: \"kubernetes.io/projected/7afe61b3-1460-48ed-9369-4d9893d2f4f4-kube-api-access-whz5v\") pod \"certified-operators-l2dj4\" (UID: \"7afe61b3-1460-48ed-9369-4d9893d2f4f4\") " pod="openshift-marketplace/certified-operators-l2dj4"
Mar 08 03:11:48.943556 master-0 kubenswrapper[7387]: I0308 03:11:48.943440 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:48.943556 master-0 kubenswrapper[7387]: I0308 03:11:48.943458 7387 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:48.943556 master-0 kubenswrapper[7387]: I0308 03:11:48.943473 7387 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 08 03:11:49.045105 master-0 kubenswrapper[7387]: I0308 03:11:49.045040 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7afe61b3-1460-48ed-9369-4d9893d2f4f4-catalog-content\") pod \"certified-operators-l2dj4\" (UID: \"7afe61b3-1460-48ed-9369-4d9893d2f4f4\") " pod="openshift-marketplace/certified-operators-l2dj4"
Mar 08 03:11:49.045105 master-0 kubenswrapper[7387]: I0308 03:11:49.045105 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7afe61b3-1460-48ed-9369-4d9893d2f4f4-utilities\") pod \"certified-operators-l2dj4\" (UID: \"7afe61b3-1460-48ed-9369-4d9893d2f4f4\") " pod="openshift-marketplace/certified-operators-l2dj4"
Mar 08 03:11:49.045701 master-0 kubenswrapper[7387]: I0308 03:11:49.045136 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whz5v\" (UniqueName: \"kubernetes.io/projected/7afe61b3-1460-48ed-9369-4d9893d2f4f4-kube-api-access-whz5v\") pod \"certified-operators-l2dj4\" (UID: \"7afe61b3-1460-48ed-9369-4d9893d2f4f4\") " pod="openshift-marketplace/certified-operators-l2dj4"
Mar 08 03:11:49.045838 master-0 kubenswrapper[7387]: I0308 03:11:49.045750 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7afe61b3-1460-48ed-9369-4d9893d2f4f4-catalog-content\") pod \"certified-operators-l2dj4\" (UID: \"7afe61b3-1460-48ed-9369-4d9893d2f4f4\") " pod="openshift-marketplace/certified-operators-l2dj4"
Mar 08 03:11:49.046007 master-0 kubenswrapper[7387]: I0308 03:11:49.045976 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7afe61b3-1460-48ed-9369-4d9893d2f4f4-utilities\") pod \"certified-operators-l2dj4\" (UID: \"7afe61b3-1460-48ed-9369-4d9893d2f4f4\") " pod="openshift-marketplace/certified-operators-l2dj4"
Mar 08 03:11:49.067699 master-0 kubenswrapper[7387]: I0308 03:11:49.067038 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whz5v\" (UniqueName: \"kubernetes.io/projected/7afe61b3-1460-48ed-9369-4d9893d2f4f4-kube-api-access-whz5v\") pod \"certified-operators-l2dj4\" (UID: \"7afe61b3-1460-48ed-9369-4d9893d2f4f4\") " pod="openshift-marketplace/certified-operators-l2dj4"
Mar 08 03:11:49.109601 master-0 kubenswrapper[7387]: I0308 03:11:49.109538 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l2dj4"
Mar 08 03:11:49.297245 master-0 kubenswrapper[7387]: I0308 03:11:49.297155 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bv2v9"]
Mar 08 03:11:49.384833 master-0 kubenswrapper[7387]: I0308 03:11:49.384755 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv2v9" event={"ID":"10895809-a444-42ec-a41f-111e17f6beb3","Type":"ContainerStarted","Data":"eb6a0fa697f07bd8b4258d861bc42d4dd0bded85d64bcf04e5a347df7ac607d8"}
Mar 08 03:11:49.402410 master-0 kubenswrapper[7387]: I0308 03:11:49.395774 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75/installer/0.log"
Mar 08 03:11:49.402410 master-0 kubenswrapper[7387]: I0308 03:11:49.395831 7387 generic.go:334] "Generic (PLEG): container finished" podID="d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75" containerID="ffb6f8fa97406fdc4a2f646861c32438f691b60c7a72b4ca039b272eae00c682" exitCode=1
Mar 08 03:11:49.402410 master-0 kubenswrapper[7387]: I0308 03:11:49.396534 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Mar 08 03:11:49.402410 master-0 kubenswrapper[7387]: I0308 03:11:49.396625 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75","Type":"ContainerDied","Data":"ffb6f8fa97406fdc4a2f646861c32438f691b60c7a72b4ca039b272eae00c682"}
Mar 08 03:11:49.402410 master-0 kubenswrapper[7387]: I0308 03:11:49.396683 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75","Type":"ContainerDied","Data":"ab218e481e6b65c60b8d01ae90ba379f9494fedc6779f71bcb8886d790d6b966"}
Mar 08 03:11:49.402410 master-0 kubenswrapper[7387]: I0308 03:11:49.396702 7387 scope.go:117] "RemoveContainer" containerID="ffb6f8fa97406fdc4a2f646861c32438f691b60c7a72b4ca039b272eae00c682"
Mar 08 03:11:49.402410 master-0 kubenswrapper[7387]: I0308 03:11:49.398492 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm"
Mar 08 03:11:49.402410 master-0 kubenswrapper[7387]: I0308 03:11:49.398813 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-758ff9f665-bmgpk"
Mar 08 03:11:49.427329 master-0 kubenswrapper[7387]: I0308 03:11:49.427251 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw"]
Mar 08 03:11:49.432184 master-0 kubenswrapper[7387]: I0308 03:11:49.432091 7387 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw" Mar 08 03:11:49.439008 master-0 kubenswrapper[7387]: I0308 03:11:49.438397 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 08 03:11:49.443022 master-0 kubenswrapper[7387]: I0308 03:11:49.442076 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw"] Mar 08 03:11:49.446307 master-0 kubenswrapper[7387]: I0308 03:11:49.444668 7387 scope.go:117] "RemoveContainer" containerID="ffb6f8fa97406fdc4a2f646861c32438f691b60c7a72b4ca039b272eae00c682" Mar 08 03:11:49.446971 master-0 kubenswrapper[7387]: E0308 03:11:49.446659 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffb6f8fa97406fdc4a2f646861c32438f691b60c7a72b4ca039b272eae00c682\": container with ID starting with ffb6f8fa97406fdc4a2f646861c32438f691b60c7a72b4ca039b272eae00c682 not found: ID does not exist" containerID="ffb6f8fa97406fdc4a2f646861c32438f691b60c7a72b4ca039b272eae00c682" Mar 08 03:11:49.446971 master-0 kubenswrapper[7387]: I0308 03:11:49.446776 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffb6f8fa97406fdc4a2f646861c32438f691b60c7a72b4ca039b272eae00c682"} err="failed to get container status \"ffb6f8fa97406fdc4a2f646861c32438f691b60c7a72b4ca039b272eae00c682\": rpc error: code = NotFound desc = could not find container \"ffb6f8fa97406fdc4a2f646861c32438f691b60c7a72b4ca039b272eae00c682\": container with ID starting with ffb6f8fa97406fdc4a2f646861c32438f691b60c7a72b4ca039b272eae00c682 not found: ID does not exist" Mar 08 03:11:49.515971 master-0 kubenswrapper[7387]: I0308 03:11:49.515926 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-758ff9f665-bmgpk"] Mar 08 
03:11:49.519458 master-0 kubenswrapper[7387]: I0308 03:11:49.519435 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-758ff9f665-bmgpk"] Mar 08 03:11:49.532035 master-0 kubenswrapper[7387]: I0308 03:11:49.530870 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 08 03:11:49.533988 master-0 kubenswrapper[7387]: I0308 03:11:49.533600 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 08 03:11:49.557551 master-0 kubenswrapper[7387]: I0308 03:11:49.557234 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a1b7b0d-6e00-485e-86e8-7bd047569328-apiservice-cert\") pod \"packageserver-7fcc847fc6-s2lnw\" (UID: \"7a1b7b0d-6e00-485e-86e8-7bd047569328\") " pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw" Mar 08 03:11:49.557551 master-0 kubenswrapper[7387]: I0308 03:11:49.557295 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkp89\" (UniqueName: \"kubernetes.io/projected/7a1b7b0d-6e00-485e-86e8-7bd047569328-kube-api-access-fkp89\") pod \"packageserver-7fcc847fc6-s2lnw\" (UID: \"7a1b7b0d-6e00-485e-86e8-7bd047569328\") " pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw" Mar 08 03:11:49.557551 master-0 kubenswrapper[7387]: I0308 03:11:49.557349 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a1b7b0d-6e00-485e-86e8-7bd047569328-webhook-cert\") pod \"packageserver-7fcc847fc6-s2lnw\" (UID: \"7a1b7b0d-6e00-485e-86e8-7bd047569328\") " pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw" Mar 08 03:11:49.557551 master-0 kubenswrapper[7387]: I0308 03:11:49.557429 7387 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7a1b7b0d-6e00-485e-86e8-7bd047569328-tmpfs\") pod \"packageserver-7fcc847fc6-s2lnw\" (UID: \"7a1b7b0d-6e00-485e-86e8-7bd047569328\") " pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw" Mar 08 03:11:49.565353 master-0 kubenswrapper[7387]: I0308 03:11:49.565290 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l2dj4"] Mar 08 03:11:49.567454 master-0 kubenswrapper[7387]: I0308 03:11:49.566960 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm"] Mar 08 03:11:49.568589 master-0 kubenswrapper[7387]: I0308 03:11:49.568313 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ff96cfc69-gqmqm"] Mar 08 03:11:49.658554 master-0 kubenswrapper[7387]: I0308 03:11:49.658499 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a1b7b0d-6e00-485e-86e8-7bd047569328-webhook-cert\") pod \"packageserver-7fcc847fc6-s2lnw\" (UID: \"7a1b7b0d-6e00-485e-86e8-7bd047569328\") " pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw" Mar 08 03:11:49.658753 master-0 kubenswrapper[7387]: I0308 03:11:49.658608 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7a1b7b0d-6e00-485e-86e8-7bd047569328-tmpfs\") pod \"packageserver-7fcc847fc6-s2lnw\" (UID: \"7a1b7b0d-6e00-485e-86e8-7bd047569328\") " pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw" Mar 08 03:11:49.658753 master-0 kubenswrapper[7387]: I0308 03:11:49.658643 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/7a1b7b0d-6e00-485e-86e8-7bd047569328-apiservice-cert\") pod \"packageserver-7fcc847fc6-s2lnw\" (UID: \"7a1b7b0d-6e00-485e-86e8-7bd047569328\") " pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw" Mar 08 03:11:49.658753 master-0 kubenswrapper[7387]: I0308 03:11:49.658675 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkp89\" (UniqueName: \"kubernetes.io/projected/7a1b7b0d-6e00-485e-86e8-7bd047569328-kube-api-access-fkp89\") pod \"packageserver-7fcc847fc6-s2lnw\" (UID: \"7a1b7b0d-6e00-485e-86e8-7bd047569328\") " pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw" Mar 08 03:11:49.658753 master-0 kubenswrapper[7387]: I0308 03:11:49.658718 7387 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63debde5-3369-4cfb-9c82-95690671d24a-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 03:11:49.659698 master-0 kubenswrapper[7387]: I0308 03:11:49.659643 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7a1b7b0d-6e00-485e-86e8-7bd047569328-tmpfs\") pod \"packageserver-7fcc847fc6-s2lnw\" (UID: \"7a1b7b0d-6e00-485e-86e8-7bd047569328\") " pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw" Mar 08 03:11:49.663597 master-0 kubenswrapper[7387]: I0308 03:11:49.663562 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a1b7b0d-6e00-485e-86e8-7bd047569328-webhook-cert\") pod \"packageserver-7fcc847fc6-s2lnw\" (UID: \"7a1b7b0d-6e00-485e-86e8-7bd047569328\") " pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw" Mar 08 03:11:49.663817 master-0 kubenswrapper[7387]: I0308 03:11:49.663777 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/7a1b7b0d-6e00-485e-86e8-7bd047569328-apiservice-cert\") pod \"packageserver-7fcc847fc6-s2lnw\" (UID: \"7a1b7b0d-6e00-485e-86e8-7bd047569328\") " pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw" Mar 08 03:11:49.675316 master-0 kubenswrapper[7387]: I0308 03:11:49.675268 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkp89\" (UniqueName: \"kubernetes.io/projected/7a1b7b0d-6e00-485e-86e8-7bd047569328-kube-api-access-fkp89\") pod \"packageserver-7fcc847fc6-s2lnw\" (UID: \"7a1b7b0d-6e00-485e-86e8-7bd047569328\") " pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw" Mar 08 03:11:49.759755 master-0 kubenswrapper[7387]: I0308 03:11:49.759685 7387 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48cb3a00-5875-4d62-8afd-f964c9545c65-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 03:11:49.765072 master-0 kubenswrapper[7387]: I0308 03:11:49.765029 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48cb3a00-5875-4d62-8afd-f964c9545c65" path="/var/lib/kubelet/pods/48cb3a00-5875-4d62-8afd-f964c9545c65/volumes" Mar 08 03:11:49.765405 master-0 kubenswrapper[7387]: I0308 03:11:49.765380 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63debde5-3369-4cfb-9c82-95690671d24a" path="/var/lib/kubelet/pods/63debde5-3369-4cfb-9c82-95690671d24a/volumes" Mar 08 03:11:49.765670 master-0 kubenswrapper[7387]: I0308 03:11:49.765645 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75" path="/var/lib/kubelet/pods/d5b0bb96-9fcd-426d-abb7-aa3ec6bcbb75/volumes" Mar 08 03:11:49.785120 master-0 kubenswrapper[7387]: I0308 03:11:49.785006 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw" Mar 08 03:11:50.166625 master-0 kubenswrapper[7387]: I0308 03:11:50.166515 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv"] Mar 08 03:11:50.172339 master-0 kubenswrapper[7387]: I0308 03:11:50.172291 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd1c09ba-b44c-446a-abe0-53ac3e910a77-proxy-ca-bundles\") pod \"controller-manager-77c5c9d7dd-xtftv\" (UID: \"dd1c09ba-b44c-446a-abe0-53ac3e910a77\") " pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" Mar 08 03:11:50.172510 master-0 kubenswrapper[7387]: I0308 03:11:50.172346 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1c09ba-b44c-446a-abe0-53ac3e910a77-serving-cert\") pod \"controller-manager-77c5c9d7dd-xtftv\" (UID: \"dd1c09ba-b44c-446a-abe0-53ac3e910a77\") " pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" Mar 08 03:11:50.172510 master-0 kubenswrapper[7387]: I0308 03:11:50.172371 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1c09ba-b44c-446a-abe0-53ac3e910a77-config\") pod \"controller-manager-77c5c9d7dd-xtftv\" (UID: \"dd1c09ba-b44c-446a-abe0-53ac3e910a77\") " pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" Mar 08 03:11:50.172510 master-0 kubenswrapper[7387]: I0308 03:11:50.172399 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd1c09ba-b44c-446a-abe0-53ac3e910a77-client-ca\") pod \"controller-manager-77c5c9d7dd-xtftv\" (UID: 
\"dd1c09ba-b44c-446a-abe0-53ac3e910a77\") " pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" Mar 08 03:11:50.172510 master-0 kubenswrapper[7387]: I0308 03:11:50.172424 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4np7\" (UniqueName: \"kubernetes.io/projected/dd1c09ba-b44c-446a-abe0-53ac3e910a77-kube-api-access-g4np7\") pod \"controller-manager-77c5c9d7dd-xtftv\" (UID: \"dd1c09ba-b44c-446a-abe0-53ac3e910a77\") " pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" Mar 08 03:11:50.172654 master-0 kubenswrapper[7387]: I0308 03:11:50.172605 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" Mar 08 03:11:50.183392 master-0 kubenswrapper[7387]: I0308 03:11:50.183322 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 08 03:11:50.183576 master-0 kubenswrapper[7387]: I0308 03:11:50.183476 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 08 03:11:50.184229 master-0 kubenswrapper[7387]: I0308 03:11:50.184201 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 08 03:11:50.184359 master-0 kubenswrapper[7387]: I0308 03:11:50.184341 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 08 03:11:50.184608 master-0 kubenswrapper[7387]: I0308 03:11:50.184563 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 08 03:11:50.216114 master-0 kubenswrapper[7387]: I0308 03:11:50.216042 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 08 03:11:50.217134 master-0 
kubenswrapper[7387]: I0308 03:11:50.217098 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj"] Mar 08 03:11:50.217821 master-0 kubenswrapper[7387]: I0308 03:11:50.217797 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" Mar 08 03:11:50.219821 master-0 kubenswrapper[7387]: I0308 03:11:50.219776 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qwkmn"] Mar 08 03:11:50.220290 master-0 kubenswrapper[7387]: I0308 03:11:50.220267 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 08 03:11:50.220329 master-0 kubenswrapper[7387]: I0308 03:11:50.220268 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 08 03:11:50.220489 master-0 kubenswrapper[7387]: I0308 03:11:50.220468 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 08 03:11:50.220680 master-0 kubenswrapper[7387]: I0308 03:11:50.220661 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 08 03:11:50.220810 master-0 kubenswrapper[7387]: I0308 03:11:50.220783 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwkmn" Mar 08 03:11:50.222434 master-0 kubenswrapper[7387]: I0308 03:11:50.221820 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 08 03:11:50.232030 master-0 kubenswrapper[7387]: I0308 03:11:50.231990 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv"] Mar 08 03:11:50.234226 master-0 kubenswrapper[7387]: I0308 03:11:50.234179 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj"] Mar 08 03:11:50.248447 master-0 kubenswrapper[7387]: I0308 03:11:50.248405 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwkmn"] Mar 08 03:11:50.259203 master-0 kubenswrapper[7387]: I0308 03:11:50.258465 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw"] Mar 08 03:11:50.277826 master-0 kubenswrapper[7387]: W0308 03:11:50.273163 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a1b7b0d_6e00_485e_86e8_7bd047569328.slice/crio-b5b4816a1b0e9863b488619eb67bad29895714d7381b49c1cf6bbbe6c6b403f8 WatchSource:0}: Error finding container b5b4816a1b0e9863b488619eb67bad29895714d7381b49c1cf6bbbe6c6b403f8: Status 404 returned error can't find the container with id b5b4816a1b0e9863b488619eb67bad29895714d7381b49c1cf6bbbe6c6b403f8 Mar 08 03:11:50.277826 master-0 kubenswrapper[7387]: I0308 03:11:50.273311 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2495994-736c-4916-b210-ff5633f3387d-client-ca\") pod \"route-controller-manager-8c4996cd4-qsvqj\" (UID: 
\"e2495994-736c-4916-b210-ff5633f3387d\") " pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" Mar 08 03:11:50.277826 master-0 kubenswrapper[7387]: I0308 03:11:50.273353 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd1c09ba-b44c-446a-abe0-53ac3e910a77-proxy-ca-bundles\") pod \"controller-manager-77c5c9d7dd-xtftv\" (UID: \"dd1c09ba-b44c-446a-abe0-53ac3e910a77\") " pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" Mar 08 03:11:50.277826 master-0 kubenswrapper[7387]: I0308 03:11:50.273382 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2495994-736c-4916-b210-ff5633f3387d-serving-cert\") pod \"route-controller-manager-8c4996cd4-qsvqj\" (UID: \"e2495994-736c-4916-b210-ff5633f3387d\") " pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" Mar 08 03:11:50.277826 master-0 kubenswrapper[7387]: I0308 03:11:50.273398 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv4f8\" (UniqueName: \"kubernetes.io/projected/e2495994-736c-4916-b210-ff5633f3387d-kube-api-access-qv4f8\") pod \"route-controller-manager-8c4996cd4-qsvqj\" (UID: \"e2495994-736c-4916-b210-ff5633f3387d\") " pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" Mar 08 03:11:50.277826 master-0 kubenswrapper[7387]: I0308 03:11:50.273477 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1c09ba-b44c-446a-abe0-53ac3e910a77-serving-cert\") pod \"controller-manager-77c5c9d7dd-xtftv\" (UID: \"dd1c09ba-b44c-446a-abe0-53ac3e910a77\") " pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" Mar 08 03:11:50.277826 master-0 kubenswrapper[7387]: I0308 
03:11:50.273546 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1c09ba-b44c-446a-abe0-53ac3e910a77-config\") pod \"controller-manager-77c5c9d7dd-xtftv\" (UID: \"dd1c09ba-b44c-446a-abe0-53ac3e910a77\") " pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" Mar 08 03:11:50.277826 master-0 kubenswrapper[7387]: I0308 03:11:50.273590 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2495994-736c-4916-b210-ff5633f3387d-config\") pod \"route-controller-manager-8c4996cd4-qsvqj\" (UID: \"e2495994-736c-4916-b210-ff5633f3387d\") " pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" Mar 08 03:11:50.277826 master-0 kubenswrapper[7387]: I0308 03:11:50.273625 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd1c09ba-b44c-446a-abe0-53ac3e910a77-client-ca\") pod \"controller-manager-77c5c9d7dd-xtftv\" (UID: \"dd1c09ba-b44c-446a-abe0-53ac3e910a77\") " pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" Mar 08 03:11:50.277826 master-0 kubenswrapper[7387]: I0308 03:11:50.273689 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4np7\" (UniqueName: \"kubernetes.io/projected/dd1c09ba-b44c-446a-abe0-53ac3e910a77-kube-api-access-g4np7\") pod \"controller-manager-77c5c9d7dd-xtftv\" (UID: \"dd1c09ba-b44c-446a-abe0-53ac3e910a77\") " pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" Mar 08 03:11:50.277826 master-0 kubenswrapper[7387]: I0308 03:11:50.275524 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd1c09ba-b44c-446a-abe0-53ac3e910a77-client-ca\") pod \"controller-manager-77c5c9d7dd-xtftv\" (UID: 
\"dd1c09ba-b44c-446a-abe0-53ac3e910a77\") " pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" Mar 08 03:11:50.277826 master-0 kubenswrapper[7387]: I0308 03:11:50.275618 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1c09ba-b44c-446a-abe0-53ac3e910a77-config\") pod \"controller-manager-77c5c9d7dd-xtftv\" (UID: \"dd1c09ba-b44c-446a-abe0-53ac3e910a77\") " pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" Mar 08 03:11:50.282590 master-0 kubenswrapper[7387]: I0308 03:11:50.278144 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1c09ba-b44c-446a-abe0-53ac3e910a77-serving-cert\") pod \"controller-manager-77c5c9d7dd-xtftv\" (UID: \"dd1c09ba-b44c-446a-abe0-53ac3e910a77\") " pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" Mar 08 03:11:50.289335 master-0 kubenswrapper[7387]: I0308 03:11:50.289279 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd1c09ba-b44c-446a-abe0-53ac3e910a77-proxy-ca-bundles\") pod \"controller-manager-77c5c9d7dd-xtftv\" (UID: \"dd1c09ba-b44c-446a-abe0-53ac3e910a77\") " pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" Mar 08 03:11:50.297842 master-0 kubenswrapper[7387]: I0308 03:11:50.297789 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4np7\" (UniqueName: \"kubernetes.io/projected/dd1c09ba-b44c-446a-abe0-53ac3e910a77-kube-api-access-g4np7\") pod \"controller-manager-77c5c9d7dd-xtftv\" (UID: \"dd1c09ba-b44c-446a-abe0-53ac3e910a77\") " pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" Mar 08 03:11:50.374490 master-0 kubenswrapper[7387]: I0308 03:11:50.374450 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a9142af-1b48-49b1-8e0f-53e8494d5e01-catalog-content\") pod \"redhat-marketplace-qwkmn\" (UID: \"3a9142af-1b48-49b1-8e0f-53e8494d5e01\") " pod="openshift-marketplace/redhat-marketplace-qwkmn" Mar 08 03:11:50.374699 master-0 kubenswrapper[7387]: I0308 03:11:50.374685 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a9142af-1b48-49b1-8e0f-53e8494d5e01-utilities\") pod \"redhat-marketplace-qwkmn\" (UID: \"3a9142af-1b48-49b1-8e0f-53e8494d5e01\") " pod="openshift-marketplace/redhat-marketplace-qwkmn" Mar 08 03:11:50.374794 master-0 kubenswrapper[7387]: I0308 03:11:50.374781 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnzvf\" (UniqueName: \"kubernetes.io/projected/3a9142af-1b48-49b1-8e0f-53e8494d5e01-kube-api-access-vnzvf\") pod \"redhat-marketplace-qwkmn\" (UID: \"3a9142af-1b48-49b1-8e0f-53e8494d5e01\") " pod="openshift-marketplace/redhat-marketplace-qwkmn" Mar 08 03:11:50.374873 master-0 kubenswrapper[7387]: I0308 03:11:50.374861 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2495994-736c-4916-b210-ff5633f3387d-client-ca\") pod \"route-controller-manager-8c4996cd4-qsvqj\" (UID: \"e2495994-736c-4916-b210-ff5633f3387d\") " pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" Mar 08 03:11:50.375001 master-0 kubenswrapper[7387]: I0308 03:11:50.374986 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2495994-736c-4916-b210-ff5633f3387d-serving-cert\") pod \"route-controller-manager-8c4996cd4-qsvqj\" (UID: \"e2495994-736c-4916-b210-ff5633f3387d\") " pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" Mar 08 
03:11:50.375073 master-0 kubenswrapper[7387]: I0308 03:11:50.375061 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv4f8\" (UniqueName: \"kubernetes.io/projected/e2495994-736c-4916-b210-ff5633f3387d-kube-api-access-qv4f8\") pod \"route-controller-manager-8c4996cd4-qsvqj\" (UID: \"e2495994-736c-4916-b210-ff5633f3387d\") " pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj"
Mar 08 03:11:50.375168 master-0 kubenswrapper[7387]: I0308 03:11:50.375156 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2495994-736c-4916-b210-ff5633f3387d-config\") pod \"route-controller-manager-8c4996cd4-qsvqj\" (UID: \"e2495994-736c-4916-b210-ff5633f3387d\") " pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj"
Mar 08 03:11:50.375927 master-0 kubenswrapper[7387]: I0308 03:11:50.375869 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2495994-736c-4916-b210-ff5633f3387d-client-ca\") pod \"route-controller-manager-8c4996cd4-qsvqj\" (UID: \"e2495994-736c-4916-b210-ff5633f3387d\") " pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj"
Mar 08 03:11:50.376175 master-0 kubenswrapper[7387]: I0308 03:11:50.376160 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2495994-736c-4916-b210-ff5633f3387d-config\") pod \"route-controller-manager-8c4996cd4-qsvqj\" (UID: \"e2495994-736c-4916-b210-ff5633f3387d\") " pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj"
Mar 08 03:11:50.379505 master-0 kubenswrapper[7387]: I0308 03:11:50.379462 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2495994-736c-4916-b210-ff5633f3387d-serving-cert\") pod \"route-controller-manager-8c4996cd4-qsvqj\" (UID: \"e2495994-736c-4916-b210-ff5633f3387d\") " pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj"
Mar 08 03:11:50.394951 master-0 kubenswrapper[7387]: I0308 03:11:50.394890 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv4f8\" (UniqueName: \"kubernetes.io/projected/e2495994-736c-4916-b210-ff5633f3387d-kube-api-access-qv4f8\") pod \"route-controller-manager-8c4996cd4-qsvqj\" (UID: \"e2495994-736c-4916-b210-ff5633f3387d\") " pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj"
Mar 08 03:11:50.405448 master-0 kubenswrapper[7387]: I0308 03:11:50.405410 7387 generic.go:334] "Generic (PLEG): container finished" podID="10895809-a444-42ec-a41f-111e17f6beb3" containerID="559d8b447fe4fe44a5f74b23da4a499d9622c9c3bfb406e230a12898e8f8e265" exitCode=0
Mar 08 03:11:50.405543 master-0 kubenswrapper[7387]: I0308 03:11:50.405487 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv2v9" event={"ID":"10895809-a444-42ec-a41f-111e17f6beb3","Type":"ContainerDied","Data":"559d8b447fe4fe44a5f74b23da4a499d9622c9c3bfb406e230a12898e8f8e265"}
Mar 08 03:11:50.413807 master-0 kubenswrapper[7387]: I0308 03:11:50.413762 7387 generic.go:334] "Generic (PLEG): container finished" podID="7afe61b3-1460-48ed-9369-4d9893d2f4f4" containerID="7b8261d3a814a19ee13244fd0a49a5dfe2c752dcb7e2705662730f4226213153" exitCode=0
Mar 08 03:11:50.413995 master-0 kubenswrapper[7387]: I0308 03:11:50.413876 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2dj4" event={"ID":"7afe61b3-1460-48ed-9369-4d9893d2f4f4","Type":"ContainerDied","Data":"7b8261d3a814a19ee13244fd0a49a5dfe2c752dcb7e2705662730f4226213153"}
Mar 08 03:11:50.413995 master-0 kubenswrapper[7387]: I0308 03:11:50.413944 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2dj4" event={"ID":"7afe61b3-1460-48ed-9369-4d9893d2f4f4","Type":"ContainerStarted","Data":"bf1527e18b5a86e91a809b4f5d095a7a82806a089dab98ff084c268db6ce9db6"}
Mar 08 03:11:50.418358 master-0 kubenswrapper[7387]: I0308 03:11:50.418289 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw" event={"ID":"7a1b7b0d-6e00-485e-86e8-7bd047569328","Type":"ContainerStarted","Data":"b5b4816a1b0e9863b488619eb67bad29895714d7381b49c1cf6bbbe6c6b403f8"}
Mar 08 03:11:50.476304 master-0 kubenswrapper[7387]: I0308 03:11:50.476252 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a9142af-1b48-49b1-8e0f-53e8494d5e01-catalog-content\") pod \"redhat-marketplace-qwkmn\" (UID: \"3a9142af-1b48-49b1-8e0f-53e8494d5e01\") " pod="openshift-marketplace/redhat-marketplace-qwkmn"
Mar 08 03:11:50.476775 master-0 kubenswrapper[7387]: I0308 03:11:50.476726 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a9142af-1b48-49b1-8e0f-53e8494d5e01-utilities\") pod \"redhat-marketplace-qwkmn\" (UID: \"3a9142af-1b48-49b1-8e0f-53e8494d5e01\") " pod="openshift-marketplace/redhat-marketplace-qwkmn"
Mar 08 03:11:50.476871 master-0 kubenswrapper[7387]: I0308 03:11:50.476836 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnzvf\" (UniqueName: \"kubernetes.io/projected/3a9142af-1b48-49b1-8e0f-53e8494d5e01-kube-api-access-vnzvf\") pod \"redhat-marketplace-qwkmn\" (UID: \"3a9142af-1b48-49b1-8e0f-53e8494d5e01\") " pod="openshift-marketplace/redhat-marketplace-qwkmn"
Mar 08 03:11:50.477388 master-0 kubenswrapper[7387]: I0308 03:11:50.477362 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a9142af-1b48-49b1-8e0f-53e8494d5e01-catalog-content\") pod \"redhat-marketplace-qwkmn\" (UID: \"3a9142af-1b48-49b1-8e0f-53e8494d5e01\") " pod="openshift-marketplace/redhat-marketplace-qwkmn"
Mar 08 03:11:50.477452 master-0 kubenswrapper[7387]: I0308 03:11:50.477422 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a9142af-1b48-49b1-8e0f-53e8494d5e01-utilities\") pod \"redhat-marketplace-qwkmn\" (UID: \"3a9142af-1b48-49b1-8e0f-53e8494d5e01\") " pod="openshift-marketplace/redhat-marketplace-qwkmn"
Mar 08 03:11:50.493764 master-0 kubenswrapper[7387]: I0308 03:11:50.493718 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnzvf\" (UniqueName: \"kubernetes.io/projected/3a9142af-1b48-49b1-8e0f-53e8494d5e01-kube-api-access-vnzvf\") pod \"redhat-marketplace-qwkmn\" (UID: \"3a9142af-1b48-49b1-8e0f-53e8494d5e01\") " pod="openshift-marketplace/redhat-marketplace-qwkmn"
Mar 08 03:11:50.564763 master-0 kubenswrapper[7387]: I0308 03:11:50.564693 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv"
Mar 08 03:11:50.614422 master-0 kubenswrapper[7387]: I0308 03:11:50.613973 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj"
Mar 08 03:11:50.648640 master-0 kubenswrapper[7387]: I0308 03:11:50.648597 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwkmn"
Mar 08 03:11:51.010207 master-0 kubenswrapper[7387]: I0308 03:11:51.010156 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv"]
Mar 08 03:11:51.024160 master-0 kubenswrapper[7387]: W0308 03:11:51.024107 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd1c09ba_b44c_446a_abe0_53ac3e910a77.slice/crio-187df35e7836b813c131539b8b3d9d53cf0016c310d2d5141489db5ae6ac75e3 WatchSource:0}: Error finding container 187df35e7836b813c131539b8b3d9d53cf0016c310d2d5141489db5ae6ac75e3: Status 404 returned error can't find the container with id 187df35e7836b813c131539b8b3d9d53cf0016c310d2d5141489db5ae6ac75e3
Mar 08 03:11:51.105125 master-0 kubenswrapper[7387]: I0308 03:11:51.105069 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj"]
Mar 08 03:11:51.117407 master-0 kubenswrapper[7387]: W0308 03:11:51.113106 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2495994_736c_4916_b210_ff5633f3387d.slice/crio-0f031beb71b55f3d5cf502aa52b29fda44b26c543b17c2ed8446cc613eb9a37c WatchSource:0}: Error finding container 0f031beb71b55f3d5cf502aa52b29fda44b26c543b17c2ed8446cc613eb9a37c: Status 404 returned error can't find the container with id 0f031beb71b55f3d5cf502aa52b29fda44b26c543b17c2ed8446cc613eb9a37c
Mar 08 03:11:51.175961 master-0 kubenswrapper[7387]: I0308 03:11:51.175738 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 08 03:11:51.176701 master-0 kubenswrapper[7387]: I0308 03:11:51.176433 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 08 03:11:51.183931 master-0 kubenswrapper[7387]: I0308 03:11:51.183860 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d9732f3d-49d0-4400-ab54-ce029c49ec37-var-lock\") pod \"installer-3-master-0\" (UID: \"d9732f3d-49d0-4400-ab54-ce029c49ec37\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 08 03:11:51.184063 master-0 kubenswrapper[7387]: I0308 03:11:51.183957 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9732f3d-49d0-4400-ab54-ce029c49ec37-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d9732f3d-49d0-4400-ab54-ce029c49ec37\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 08 03:11:51.184210 master-0 kubenswrapper[7387]: I0308 03:11:51.184152 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9732f3d-49d0-4400-ab54-ce029c49ec37-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"d9732f3d-49d0-4400-ab54-ce029c49ec37\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 08 03:11:51.190774 master-0 kubenswrapper[7387]: I0308 03:11:51.190732 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 08 03:11:51.313313 master-0 kubenswrapper[7387]: I0308 03:11:51.313068 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9732f3d-49d0-4400-ab54-ce029c49ec37-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"d9732f3d-49d0-4400-ab54-ce029c49ec37\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 08 03:11:51.313313 master-0 kubenswrapper[7387]: I0308 03:11:51.313180 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9732f3d-49d0-4400-ab54-ce029c49ec37-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d9732f3d-49d0-4400-ab54-ce029c49ec37\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 08 03:11:51.313313 master-0 kubenswrapper[7387]: I0308 03:11:51.313206 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d9732f3d-49d0-4400-ab54-ce029c49ec37-var-lock\") pod \"installer-3-master-0\" (UID: \"d9732f3d-49d0-4400-ab54-ce029c49ec37\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 08 03:11:51.313313 master-0 kubenswrapper[7387]: I0308 03:11:51.313321 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d9732f3d-49d0-4400-ab54-ce029c49ec37-var-lock\") pod \"installer-3-master-0\" (UID: \"d9732f3d-49d0-4400-ab54-ce029c49ec37\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 08 03:11:51.313656 master-0 kubenswrapper[7387]: I0308 03:11:51.313592 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9732f3d-49d0-4400-ab54-ce029c49ec37-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"d9732f3d-49d0-4400-ab54-ce029c49ec37\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 08 03:11:51.340830 master-0 kubenswrapper[7387]: I0308 03:11:51.340751 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9732f3d-49d0-4400-ab54-ce029c49ec37-kube-api-access\") pod \"installer-3-master-0\" (UID: \"d9732f3d-49d0-4400-ab54-ce029c49ec37\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 08 03:11:51.347493 master-0 kubenswrapper[7387]: I0308 03:11:51.347371 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwkmn"]
Mar 08 03:11:51.379522 master-0 kubenswrapper[7387]: I0308 03:11:51.379450 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ljh97"]
Mar 08 03:11:51.386232 master-0 kubenswrapper[7387]: I0308 03:11:51.380884 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ljh97"
Mar 08 03:11:51.391154 master-0 kubenswrapper[7387]: I0308 03:11:51.389878 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ljh97"]
Mar 08 03:11:51.427570 master-0 kubenswrapper[7387]: I0308 03:11:51.427516 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" event={"ID":"e2495994-736c-4916-b210-ff5633f3387d","Type":"ContainerStarted","Data":"0f031beb71b55f3d5cf502aa52b29fda44b26c543b17c2ed8446cc613eb9a37c"}
Mar 08 03:11:51.428598 master-0 kubenswrapper[7387]: I0308 03:11:51.428563 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwkmn" event={"ID":"3a9142af-1b48-49b1-8e0f-53e8494d5e01","Type":"ContainerStarted","Data":"8caa1b5d7d43482e6821d9a8a466129706ff3cba15e380b7649182b138c2cbdd"}
Mar 08 03:11:51.429742 master-0 kubenswrapper[7387]: I0308 03:11:51.429708 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" event={"ID":"dd1c09ba-b44c-446a-abe0-53ac3e910a77","Type":"ContainerStarted","Data":"187df35e7836b813c131539b8b3d9d53cf0016c310d2d5141489db5ae6ac75e3"}
Mar 08 03:11:51.431243 master-0 kubenswrapper[7387]: I0308 03:11:51.431214 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw" event={"ID":"7a1b7b0d-6e00-485e-86e8-7bd047569328","Type":"ContainerStarted","Data":"651b89d40d64bc2e6248b5339db69fe39bab8dd0a6c770a59895277ebbd4cbaf"}
Mar 08 03:11:51.432494 master-0 kubenswrapper[7387]: I0308 03:11:51.432368 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw"
Mar 08 03:11:51.438826 master-0 kubenswrapper[7387]: I0308 03:11:51.438738 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw"
Mar 08 03:11:51.480582 master-0 kubenswrapper[7387]: I0308 03:11:51.480529 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw" podStartSLOduration=2.480514703 podStartE2EDuration="2.480514703s" podCreationTimestamp="2026-03-08 03:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:11:51.453727158 +0000 UTC m=+47.848202839" watchObservedRunningTime="2026-03-08 03:11:51.480514703 +0000 UTC m=+47.874990384"
Mar 08 03:11:51.495804 master-0 kubenswrapper[7387]: I0308 03:11:51.495770 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 08 03:11:51.516716 master-0 kubenswrapper[7387]: I0308 03:11:51.516686 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cctj6\" (UniqueName: \"kubernetes.io/projected/4df5a48e-425c-443e-bfdf-6d57fe1e4638-kube-api-access-cctj6\") pod \"redhat-operators-ljh97\" (UID: \"4df5a48e-425c-443e-bfdf-6d57fe1e4638\") " pod="openshift-marketplace/redhat-operators-ljh97"
Mar 08 03:11:51.517016 master-0 kubenswrapper[7387]: I0308 03:11:51.516999 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4df5a48e-425c-443e-bfdf-6d57fe1e4638-catalog-content\") pod \"redhat-operators-ljh97\" (UID: \"4df5a48e-425c-443e-bfdf-6d57fe1e4638\") " pod="openshift-marketplace/redhat-operators-ljh97"
Mar 08 03:11:51.517103 master-0 kubenswrapper[7387]: I0308 03:11:51.517092 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4df5a48e-425c-443e-bfdf-6d57fe1e4638-utilities\") pod \"redhat-operators-ljh97\" (UID: \"4df5a48e-425c-443e-bfdf-6d57fe1e4638\") " pod="openshift-marketplace/redhat-operators-ljh97"
Mar 08 03:11:51.619970 master-0 kubenswrapper[7387]: I0308 03:11:51.618921 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cctj6\" (UniqueName: \"kubernetes.io/projected/4df5a48e-425c-443e-bfdf-6d57fe1e4638-kube-api-access-cctj6\") pod \"redhat-operators-ljh97\" (UID: \"4df5a48e-425c-443e-bfdf-6d57fe1e4638\") " pod="openshift-marketplace/redhat-operators-ljh97"
Mar 08 03:11:51.619970 master-0 kubenswrapper[7387]: I0308 03:11:51.619027 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4df5a48e-425c-443e-bfdf-6d57fe1e4638-catalog-content\") pod \"redhat-operators-ljh97\" (UID: \"4df5a48e-425c-443e-bfdf-6d57fe1e4638\") " pod="openshift-marketplace/redhat-operators-ljh97"
Mar 08 03:11:51.619970 master-0 kubenswrapper[7387]: I0308 03:11:51.619048 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4df5a48e-425c-443e-bfdf-6d57fe1e4638-utilities\") pod \"redhat-operators-ljh97\" (UID: \"4df5a48e-425c-443e-bfdf-6d57fe1e4638\") " pod="openshift-marketplace/redhat-operators-ljh97"
Mar 08 03:11:51.619970 master-0 kubenswrapper[7387]: I0308 03:11:51.619413 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4df5a48e-425c-443e-bfdf-6d57fe1e4638-utilities\") pod \"redhat-operators-ljh97\" (UID: \"4df5a48e-425c-443e-bfdf-6d57fe1e4638\") " pod="openshift-marketplace/redhat-operators-ljh97"
Mar 08 03:11:51.621522 master-0 kubenswrapper[7387]: I0308 03:11:51.620375 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4df5a48e-425c-443e-bfdf-6d57fe1e4638-catalog-content\") pod \"redhat-operators-ljh97\" (UID: \"4df5a48e-425c-443e-bfdf-6d57fe1e4638\") " pod="openshift-marketplace/redhat-operators-ljh97"
Mar 08 03:11:51.637602 master-0 kubenswrapper[7387]: I0308 03:11:51.637443 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cctj6\" (UniqueName: \"kubernetes.io/projected/4df5a48e-425c-443e-bfdf-6d57fe1e4638-kube-api-access-cctj6\") pod \"redhat-operators-ljh97\" (UID: \"4df5a48e-425c-443e-bfdf-6d57fe1e4638\") " pod="openshift-marketplace/redhat-operators-ljh97"
Mar 08 03:11:51.700864 master-0 kubenswrapper[7387]: I0308 03:11:51.700686 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ljh97"
Mar 08 03:11:51.884167 master-0 kubenswrapper[7387]: I0308 03:11:51.884109 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 08 03:11:51.889419 master-0 kubenswrapper[7387]: W0308 03:11:51.889373 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd9732f3d_49d0_4400_ab54_ce029c49ec37.slice/crio-e3a8c666cc4f4a7c253a9025cc1b3ba0786a7df86504493f9d0a011c9711326c WatchSource:0}: Error finding container e3a8c666cc4f4a7c253a9025cc1b3ba0786a7df86504493f9d0a011c9711326c: Status 404 returned error can't find the container with id e3a8c666cc4f4a7c253a9025cc1b3ba0786a7df86504493f9d0a011c9711326c
Mar 08 03:11:52.093180 master-0 kubenswrapper[7387]: I0308 03:11:52.093105 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ljh97"]
Mar 08 03:11:52.101863 master-0 kubenswrapper[7387]: W0308 03:11:52.101799 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4df5a48e_425c_443e_bfdf_6d57fe1e4638.slice/crio-d3f47f44b3c84618239ebe3bfe7bf4d1b33e913e345dd91f4e5f2389d83afc0e WatchSource:0}: Error finding container d3f47f44b3c84618239ebe3bfe7bf4d1b33e913e345dd91f4e5f2389d83afc0e: Status 404 returned error can't find the container with id d3f47f44b3c84618239ebe3bfe7bf4d1b33e913e345dd91f4e5f2389d83afc0e
Mar 08 03:11:52.436422 master-0 kubenswrapper[7387]: I0308 03:11:52.436299 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"d9732f3d-49d0-4400-ab54-ce029c49ec37","Type":"ContainerStarted","Data":"f18cd7eda60c407478fca3d541ef7c3b485f22910138d1711b3dcb271a68466c"}
Mar 08 03:11:52.436422 master-0 kubenswrapper[7387]: I0308 03:11:52.436338 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"d9732f3d-49d0-4400-ab54-ce029c49ec37","Type":"ContainerStarted","Data":"e3a8c666cc4f4a7c253a9025cc1b3ba0786a7df86504493f9d0a011c9711326c"}
Mar 08 03:11:52.437670 master-0 kubenswrapper[7387]: I0308 03:11:52.437641 7387 generic.go:334] "Generic (PLEG): container finished" podID="3a9142af-1b48-49b1-8e0f-53e8494d5e01" containerID="4864123e35280779c7eb88b414c99a6dc86b1ee4312ab819168cc4c3fb25d713" exitCode=0
Mar 08 03:11:52.437777 master-0 kubenswrapper[7387]: I0308 03:11:52.437682 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwkmn" event={"ID":"3a9142af-1b48-49b1-8e0f-53e8494d5e01","Type":"ContainerDied","Data":"4864123e35280779c7eb88b414c99a6dc86b1ee4312ab819168cc4c3fb25d713"}
Mar 08 03:11:52.441587 master-0 kubenswrapper[7387]: I0308 03:11:52.441531 7387 generic.go:334] "Generic (PLEG): container finished" podID="4df5a48e-425c-443e-bfdf-6d57fe1e4638" containerID="b86a5645dd90e6bc4e4dac5621a83f7adc3f8163dd7f706bfc47510c9199c2b7" exitCode=0
Mar 08 03:11:52.442323 master-0 kubenswrapper[7387]: I0308 03:11:52.442285 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljh97" event={"ID":"4df5a48e-425c-443e-bfdf-6d57fe1e4638","Type":"ContainerDied","Data":"b86a5645dd90e6bc4e4dac5621a83f7adc3f8163dd7f706bfc47510c9199c2b7"}
Mar 08 03:11:52.442323 master-0 kubenswrapper[7387]: I0308 03:11:52.442314 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljh97" event={"ID":"4df5a48e-425c-443e-bfdf-6d57fe1e4638","Type":"ContainerStarted","Data":"d3f47f44b3c84618239ebe3bfe7bf4d1b33e913e345dd91f4e5f2389d83afc0e"}
Mar 08 03:11:52.455151 master-0 kubenswrapper[7387]: I0308 03:11:52.455074 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=1.455050398 podStartE2EDuration="1.455050398s" podCreationTimestamp="2026-03-08 03:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:11:52.453419485 +0000 UTC m=+48.847895166" watchObservedRunningTime="2026-03-08 03:11:52.455050398 +0000 UTC m=+48.849526079"
Mar 08 03:11:54.991699 master-0 kubenswrapper[7387]: I0308 03:11:54.991556 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Mar 08 03:11:54.992205 master-0 kubenswrapper[7387]: I0308 03:11:54.992148 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 08 03:11:54.995681 master-0 kubenswrapper[7387]: I0308 03:11:54.995635 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 08 03:11:55.003460 master-0 kubenswrapper[7387]: I0308 03:11:55.003410 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Mar 08 03:11:55.192398 master-0 kubenswrapper[7387]: I0308 03:11:55.192337 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a2e5993-e0cb-4c63-9dda-abbb60bfe42b-kube-api-access\") pod \"installer-1-master-0\" (UID: \"0a2e5993-e0cb-4c63-9dda-abbb60bfe42b\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 08 03:11:55.192398 master-0 kubenswrapper[7387]: I0308 03:11:55.192402 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a2e5993-e0cb-4c63-9dda-abbb60bfe42b-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"0a2e5993-e0cb-4c63-9dda-abbb60bfe42b\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 08 03:11:55.192398 master-0 kubenswrapper[7387]: I0308 03:11:55.192424 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0a2e5993-e0cb-4c63-9dda-abbb60bfe42b-var-lock\") pod \"installer-1-master-0\" (UID: \"0a2e5993-e0cb-4c63-9dda-abbb60bfe42b\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 08 03:11:55.294114 master-0 kubenswrapper[7387]: I0308 03:11:55.294049 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a2e5993-e0cb-4c63-9dda-abbb60bfe42b-kube-api-access\") pod \"installer-1-master-0\" (UID: \"0a2e5993-e0cb-4c63-9dda-abbb60bfe42b\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 08 03:11:55.294295 master-0 kubenswrapper[7387]: I0308 03:11:55.294126 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a2e5993-e0cb-4c63-9dda-abbb60bfe42b-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"0a2e5993-e0cb-4c63-9dda-abbb60bfe42b\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 08 03:11:55.294295 master-0 kubenswrapper[7387]: I0308 03:11:55.294146 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0a2e5993-e0cb-4c63-9dda-abbb60bfe42b-var-lock\") pod \"installer-1-master-0\" (UID: \"0a2e5993-e0cb-4c63-9dda-abbb60bfe42b\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 08 03:11:55.294295 master-0 kubenswrapper[7387]: I0308 03:11:55.294266 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a2e5993-e0cb-4c63-9dda-abbb60bfe42b-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"0a2e5993-e0cb-4c63-9dda-abbb60bfe42b\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 08 03:11:55.294395 master-0 kubenswrapper[7387]: I0308 03:11:55.294299 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0a2e5993-e0cb-4c63-9dda-abbb60bfe42b-var-lock\") pod \"installer-1-master-0\" (UID: \"0a2e5993-e0cb-4c63-9dda-abbb60bfe42b\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 08 03:11:55.314174 master-0 kubenswrapper[7387]: I0308 03:11:55.312823 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a2e5993-e0cb-4c63-9dda-abbb60bfe42b-kube-api-access\") pod \"installer-1-master-0\" (UID: \"0a2e5993-e0cb-4c63-9dda-abbb60bfe42b\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 08 03:11:55.336725 master-0 kubenswrapper[7387]: I0308 03:11:55.336466 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 08 03:11:58.049298 master-0 kubenswrapper[7387]: I0308 03:11:58.049249 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 08 03:11:58.049999 master-0 kubenswrapper[7387]: I0308 03:11:58.049426 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-3-master-0" podUID="d9732f3d-49d0-4400-ab54-ce029c49ec37" containerName="installer" containerID="cri-o://f18cd7eda60c407478fca3d541ef7c3b485f22910138d1711b3dcb271a68466c" gracePeriod=30
Mar 08 03:11:58.255029 master-0 kubenswrapper[7387]: I0308 03:11:58.254936 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Mar 08 03:11:58.482216 master-0 kubenswrapper[7387]: I0308 03:11:58.482169 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"0a2e5993-e0cb-4c63-9dda-abbb60bfe42b","Type":"ContainerStarted","Data":"af1629d870a431db24e184fef7d2d042da3102cfaa950212d16542cff7e837ad"}
Mar 08 03:11:58.484335 master-0 kubenswrapper[7387]: I0308 03:11:58.484291 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_d9732f3d-49d0-4400-ab54-ce029c49ec37/installer/0.log"
Mar 08 03:11:58.484428 master-0 kubenswrapper[7387]: I0308 03:11:58.484343 7387 generic.go:334] "Generic (PLEG): container finished" podID="d9732f3d-49d0-4400-ab54-ce029c49ec37" containerID="f18cd7eda60c407478fca3d541ef7c3b485f22910138d1711b3dcb271a68466c" exitCode=1
Mar 08 03:11:58.484428 master-0 kubenswrapper[7387]: I0308 03:11:58.484380 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"d9732f3d-49d0-4400-ab54-ce029c49ec37","Type":"ContainerDied","Data":"f18cd7eda60c407478fca3d541ef7c3b485f22910138d1711b3dcb271a68466c"}
Mar 08 03:11:58.488272 master-0 kubenswrapper[7387]: I0308 03:11:58.487831 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" event={"ID":"e2495994-736c-4916-b210-ff5633f3387d","Type":"ContainerStarted","Data":"d89cedfa5c6dd99c3607e2b41fd1a5a7721d2add34c9b3bd4ddfc268530aeaaf"}
Mar 08 03:11:58.488382 master-0 kubenswrapper[7387]: I0308 03:11:58.488320 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj"
Mar 08 03:11:58.503851 master-0 kubenswrapper[7387]: I0308 03:11:58.502873 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj"
Mar 08 03:11:58.540958 master-0 kubenswrapper[7387]: I0308 03:11:58.539470 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" podStartSLOduration=3.889928352 podStartE2EDuration="10.539438097s" podCreationTimestamp="2026-03-08 03:11:48 +0000 UTC" firstStartedPulling="2026-03-08 03:11:51.114683602 +0000 UTC m=+47.509159283" lastFinishedPulling="2026-03-08 03:11:57.764193347 +0000 UTC m=+54.158669028" observedRunningTime="2026-03-08 03:11:58.52055781 +0000 UTC m=+54.915033491" watchObservedRunningTime="2026-03-08 03:11:58.539438097 +0000 UTC m=+54.933913778"
Mar 08 03:11:59.505087 master-0 kubenswrapper[7387]: I0308 03:11:59.504310 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" event={"ID":"dd1c09ba-b44c-446a-abe0-53ac3e910a77","Type":"ContainerStarted","Data":"101ea5b74925f2629d3b673abc3df2646b2971ed5a02a4d33f72d8e0bafc02dc"}
Mar 08 03:11:59.506163 master-0 kubenswrapper[7387]: I0308 03:11:59.505453 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv"
Mar 08 03:11:59.511861 master-0 kubenswrapper[7387]: I0308 03:11:59.511827 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"0a2e5993-e0cb-4c63-9dda-abbb60bfe42b","Type":"ContainerStarted","Data":"2569a7eccce46264a4c7e0024d1b136ccb829cb434ec57e4613d364f065d0db9"}
Mar 08 03:11:59.513222 master-0 kubenswrapper[7387]: I0308 03:11:59.513194 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv"
Mar 08 03:11:59.535714 master-0 kubenswrapper[7387]: I0308 03:11:59.535571 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" podStartSLOduration=4.223662865 podStartE2EDuration="11.535048646s" podCreationTimestamp="2026-03-08 03:11:48 +0000 UTC" firstStartedPulling="2026-03-08 03:11:51.032084747 +0000 UTC m=+47.426560428" lastFinishedPulling="2026-03-08 03:11:58.343470538 +0000 UTC m=+54.737946209" observedRunningTime="2026-03-08 03:11:59.530363223 +0000 UTC m=+55.924838904" watchObservedRunningTime="2026-03-08 03:11:59.535048646 +0000 UTC m=+55.929524327"
Mar 08 03:11:59.621075 master-0 kubenswrapper[7387]: I0308 03:11:59.615553 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=5.615533095 podStartE2EDuration="5.615533095s" podCreationTimestamp="2026-03-08 03:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:11:59.60240748 +0000 UTC m=+55.996883171" watchObservedRunningTime="2026-03-08 03:11:59.615533095 +0000 UTC m=+56.010008776"
Mar 08 03:11:59.851447 master-0 kubenswrapper[7387]: I0308 03:11:59.851318 7387 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"]
Mar 08 03:11:59.851709 master-0 kubenswrapper[7387]: I0308 03:11:59.851658 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" containerID="cri-o://e9d535fbda5084e4f8056f4fa3682180009f17ff7a894af1bf7d64a04c950d56" gracePeriod=30
Mar 08 03:11:59.851782 master-0 kubenswrapper[7387]: I0308 03:11:59.851700 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd" containerID="cri-o://aa2691e43f5a9cad8fc8af5208972f2ff55b688600781969c01beedcd7050c10" gracePeriod=30
Mar 08 03:11:59.853839 master-0 kubenswrapper[7387]: I0308 03:11:59.853803 7387 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"]
Mar 08 03:11:59.854066 master-0 kubenswrapper[7387]: E0308 03:11:59.854028 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl"
Mar 08 03:11:59.854115 master-0 kubenswrapper[7387]: I0308 03:11:59.854066 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl"
Mar 08 03:11:59.854115 master-0 kubenswrapper[7387]: E0308 03:11:59.854087 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd"
Mar 08 03:11:59.854115 master-0 kubenswrapper[7387]: I0308 03:11:59.854095 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd"
Mar 08 03:11:59.854235 master-0 kubenswrapper[7387]: I0308 03:11:59.854216 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl"
Mar 08 03:11:59.854235 master-0 kubenswrapper[7387]: I0308 03:11:59.854232 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd"
Mar 08 03:11:59.855802 master-0 kubenswrapper[7387]: I0308 03:11:59.855773 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 08 03:11:59.983345 master-0 kubenswrapper[7387]: I0308 03:11:59.983291 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 08 03:11:59.983543 master-0 kubenswrapper[7387]: I0308 03:11:59.983366 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 08 03:11:59.983543 master-0 kubenswrapper[7387]: I0308 03:11:59.983396 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 08 03:11:59.983543 master-0 kubenswrapper[7387]: I0308 03:11:59.983414 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 08 03:11:59.983543 master-0 kubenswrapper[7387]: I0308 03:11:59.983475 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 08 03:11:59.983543 master-0 
kubenswrapper[7387]: I0308 03:11:59.983500 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:12:00.085082 master-0 kubenswrapper[7387]: I0308 03:12:00.085019 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:12:00.085280 master-0 kubenswrapper[7387]: I0308 03:12:00.085127 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:12:00.085280 master-0 kubenswrapper[7387]: I0308 03:12:00.085252 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:12:00.085368 master-0 kubenswrapper[7387]: I0308 03:12:00.085317 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:12:00.085368 master-0 kubenswrapper[7387]: I0308 03:12:00.085357 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:12:00.085455 master-0 kubenswrapper[7387]: I0308 03:12:00.085429 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:12:00.085512 master-0 kubenswrapper[7387]: I0308 03:12:00.085483 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:12:00.085558 master-0 kubenswrapper[7387]: I0308 03:12:00.085547 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:12:00.085613 master-0 kubenswrapper[7387]: I0308 03:12:00.085594 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:12:00.085717 master-0 kubenswrapper[7387]: I0308 03:12:00.085689 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:12:00.085762 master-0 
kubenswrapper[7387]: I0308 03:12:00.085721 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:12:00.085762 master-0 kubenswrapper[7387]: I0308 03:12:00.085748 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:12:09.568299 master-0 kubenswrapper[7387]: I0308 03:12:09.568170 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_8b8c5365-e7a0-4f69-913f-2e12b142e4a5/installer/0.log" Mar 08 03:12:09.568299 master-0 kubenswrapper[7387]: I0308 03:12:09.568259 7387 generic.go:334] "Generic (PLEG): container finished" podID="8b8c5365-e7a0-4f69-913f-2e12b142e4a5" containerID="2c219d2ffed7988b04169d2e3c20b8b683dd3d20eb4e97983e2ec6007ff4233d" exitCode=1 Mar 08 03:12:09.568299 master-0 kubenswrapper[7387]: I0308 03:12:09.568302 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"8b8c5365-e7a0-4f69-913f-2e12b142e4a5","Type":"ContainerDied","Data":"2c219d2ffed7988b04169d2e3c20b8b683dd3d20eb4e97983e2ec6007ff4233d"} Mar 08 03:12:10.078022 master-0 kubenswrapper[7387]: I0308 03:12:10.076532 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_d9732f3d-49d0-4400-ab54-ce029c49ec37/installer/0.log" Mar 08 03:12:10.078022 master-0 kubenswrapper[7387]: I0308 03:12:10.076668 7387 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 08 03:12:10.222636 master-0 kubenswrapper[7387]: I0308 03:12:10.222178 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d9732f3d-49d0-4400-ab54-ce029c49ec37-var-lock\") pod \"d9732f3d-49d0-4400-ab54-ce029c49ec37\" (UID: \"d9732f3d-49d0-4400-ab54-ce029c49ec37\") " Mar 08 03:12:10.222636 master-0 kubenswrapper[7387]: I0308 03:12:10.222271 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9732f3d-49d0-4400-ab54-ce029c49ec37-var-lock" (OuterVolumeSpecName: "var-lock") pod "d9732f3d-49d0-4400-ab54-ce029c49ec37" (UID: "d9732f3d-49d0-4400-ab54-ce029c49ec37"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:12:10.222636 master-0 kubenswrapper[7387]: I0308 03:12:10.222322 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9732f3d-49d0-4400-ab54-ce029c49ec37-kubelet-dir\") pod \"d9732f3d-49d0-4400-ab54-ce029c49ec37\" (UID: \"d9732f3d-49d0-4400-ab54-ce029c49ec37\") " Mar 08 03:12:10.222636 master-0 kubenswrapper[7387]: I0308 03:12:10.222380 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9732f3d-49d0-4400-ab54-ce029c49ec37-kube-api-access\") pod \"d9732f3d-49d0-4400-ab54-ce029c49ec37\" (UID: \"d9732f3d-49d0-4400-ab54-ce029c49ec37\") " Mar 08 03:12:10.222636 master-0 kubenswrapper[7387]: I0308 03:12:10.222411 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9732f3d-49d0-4400-ab54-ce029c49ec37-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d9732f3d-49d0-4400-ab54-ce029c49ec37" (UID: "d9732f3d-49d0-4400-ab54-ce029c49ec37"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:12:10.222636 master-0 kubenswrapper[7387]: I0308 03:12:10.222594 7387 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d9732f3d-49d0-4400-ab54-ce029c49ec37-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 03:12:10.222636 master-0 kubenswrapper[7387]: I0308 03:12:10.222610 7387 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9732f3d-49d0-4400-ab54-ce029c49ec37-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 03:12:10.228848 master-0 kubenswrapper[7387]: I0308 03:12:10.228801 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9732f3d-49d0-4400-ab54-ce029c49ec37-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d9732f3d-49d0-4400-ab54-ce029c49ec37" (UID: "d9732f3d-49d0-4400-ab54-ce029c49ec37"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:12:10.333642 master-0 kubenswrapper[7387]: I0308 03:12:10.333512 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d9732f3d-49d0-4400-ab54-ce029c49ec37-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 03:12:10.573519 master-0 kubenswrapper[7387]: I0308 03:12:10.573484 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_d9732f3d-49d0-4400-ab54-ce029c49ec37/installer/0.log" Mar 08 03:12:10.573519 master-0 kubenswrapper[7387]: I0308 03:12:10.573527 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"d9732f3d-49d0-4400-ab54-ce029c49ec37","Type":"ContainerDied","Data":"e3a8c666cc4f4a7c253a9025cc1b3ba0786a7df86504493f9d0a011c9711326c"} Mar 08 03:12:10.574071 master-0 kubenswrapper[7387]: I0308 03:12:10.573562 7387 scope.go:117] "RemoveContainer" containerID="f18cd7eda60c407478fca3d541ef7c3b485f22910138d1711b3dcb271a68466c" Mar 08 03:12:10.574071 master-0 kubenswrapper[7387]: I0308 03:12:10.573645 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 08 03:12:12.485095 master-0 kubenswrapper[7387]: I0308 03:12:12.485040 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_8b8c5365-e7a0-4f69-913f-2e12b142e4a5/installer/0.log" Mar 08 03:12:12.485095 master-0 kubenswrapper[7387]: I0308 03:12:12.485101 7387 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 08 03:12:12.594580 master-0 kubenswrapper[7387]: I0308 03:12:12.594389 7387 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="f3c0f05b8863cad41e739a3290ee1b766e3215209ff171cd04766d542d2cefd2" exitCode=1 Mar 08 03:12:12.594580 master-0 kubenswrapper[7387]: I0308 03:12:12.594499 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"f3c0f05b8863cad41e739a3290ee1b766e3215209ff171cd04766d542d2cefd2"} Mar 08 03:12:12.595371 master-0 kubenswrapper[7387]: I0308 03:12:12.595304 7387 scope.go:117] "RemoveContainer" containerID="f3c0f05b8863cad41e739a3290ee1b766e3215209ff171cd04766d542d2cefd2" Mar 08 03:12:12.597557 master-0 kubenswrapper[7387]: I0308 03:12:12.597062 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"8b8c5365-e7a0-4f69-913f-2e12b142e4a5","Type":"ContainerDied","Data":"66dc9b6e365401bbecd33295a9a91f35bfb68325d8da1da36b865bca1ae7caa4"} Mar 08 03:12:12.597557 master-0 kubenswrapper[7387]: I0308 03:12:12.597117 7387 scope.go:117] "RemoveContainer" containerID="2c219d2ffed7988b04169d2e3c20b8b683dd3d20eb4e97983e2ec6007ff4233d" Mar 08 03:12:12.597557 master-0 kubenswrapper[7387]: I0308 03:12:12.597176 7387 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 08 03:12:12.663424 master-0 kubenswrapper[7387]: I0308 03:12:12.663359 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b8c5365-e7a0-4f69-913f-2e12b142e4a5-kubelet-dir\") pod \"8b8c5365-e7a0-4f69-913f-2e12b142e4a5\" (UID: \"8b8c5365-e7a0-4f69-913f-2e12b142e4a5\") " Mar 08 03:12:12.663755 master-0 kubenswrapper[7387]: I0308 03:12:12.663499 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b8c5365-e7a0-4f69-913f-2e12b142e4a5-var-lock\") pod \"8b8c5365-e7a0-4f69-913f-2e12b142e4a5\" (UID: \"8b8c5365-e7a0-4f69-913f-2e12b142e4a5\") " Mar 08 03:12:12.663755 master-0 kubenswrapper[7387]: I0308 03:12:12.663544 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b8c5365-e7a0-4f69-913f-2e12b142e4a5-kube-api-access\") pod \"8b8c5365-e7a0-4f69-913f-2e12b142e4a5\" (UID: \"8b8c5365-e7a0-4f69-913f-2e12b142e4a5\") " Mar 08 03:12:12.664723 master-0 kubenswrapper[7387]: I0308 03:12:12.664401 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b8c5365-e7a0-4f69-913f-2e12b142e4a5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8b8c5365-e7a0-4f69-913f-2e12b142e4a5" (UID: "8b8c5365-e7a0-4f69-913f-2e12b142e4a5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:12:12.664723 master-0 kubenswrapper[7387]: I0308 03:12:12.664475 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b8c5365-e7a0-4f69-913f-2e12b142e4a5-var-lock" (OuterVolumeSpecName: "var-lock") pod "8b8c5365-e7a0-4f69-913f-2e12b142e4a5" (UID: "8b8c5365-e7a0-4f69-913f-2e12b142e4a5"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:12:12.672636 master-0 kubenswrapper[7387]: I0308 03:12:12.671943 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b8c5365-e7a0-4f69-913f-2e12b142e4a5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8b8c5365-e7a0-4f69-913f-2e12b142e4a5" (UID: "8b8c5365-e7a0-4f69-913f-2e12b142e4a5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:12:12.765932 master-0 kubenswrapper[7387]: I0308 03:12:12.764414 7387 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b8c5365-e7a0-4f69-913f-2e12b142e4a5-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 03:12:12.765932 master-0 kubenswrapper[7387]: I0308 03:12:12.764464 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b8c5365-e7a0-4f69-913f-2e12b142e4a5-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 03:12:12.765932 master-0 kubenswrapper[7387]: I0308 03:12:12.764480 7387 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b8c5365-e7a0-4f69-913f-2e12b142e4a5-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 03:12:13.319547 master-0 kubenswrapper[7387]: I0308 03:12:13.319507 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:12:13.607005 master-0 kubenswrapper[7387]: I0308 03:12:13.606799 7387 generic.go:334] "Generic (PLEG): container finished" podID="4df5a48e-425c-443e-bfdf-6d57fe1e4638" containerID="df3248903529862f585f7855a96e007f302ab5cb04ea2ed322080f24a1d8ae70" exitCode=0 Mar 08 03:12:13.607005 master-0 kubenswrapper[7387]: I0308 03:12:13.606978 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-ljh97" event={"ID":"4df5a48e-425c-443e-bfdf-6d57fe1e4638","Type":"ContainerDied","Data":"df3248903529862f585f7855a96e007f302ab5cb04ea2ed322080f24a1d8ae70"} Mar 08 03:12:13.610968 master-0 kubenswrapper[7387]: I0308 03:12:13.610861 7387 generic.go:334] "Generic (PLEG): container finished" podID="10895809-a444-42ec-a41f-111e17f6beb3" containerID="13d72c72eed7a2191668b1f63c791fd4e7b201f6fbf35bfc7b80bd017cbf36a2" exitCode=0 Mar 08 03:12:13.610968 master-0 kubenswrapper[7387]: I0308 03:12:13.610940 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv2v9" event={"ID":"10895809-a444-42ec-a41f-111e17f6beb3","Type":"ContainerDied","Data":"13d72c72eed7a2191668b1f63c791fd4e7b201f6fbf35bfc7b80bd017cbf36a2"} Mar 08 03:12:13.616497 master-0 kubenswrapper[7387]: I0308 03:12:13.616433 7387 generic.go:334] "Generic (PLEG): container finished" podID="7afe61b3-1460-48ed-9369-4d9893d2f4f4" containerID="298d71a4dfadc213d01cb4cfba24e2539fd0a6e29419f1b99aebae7144764a03" exitCode=0 Mar 08 03:12:13.616618 master-0 kubenswrapper[7387]: I0308 03:12:13.616566 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2dj4" event={"ID":"7afe61b3-1460-48ed-9369-4d9893d2f4f4","Type":"ContainerDied","Data":"298d71a4dfadc213d01cb4cfba24e2539fd0a6e29419f1b99aebae7144764a03"} Mar 08 03:12:13.619964 master-0 kubenswrapper[7387]: I0308 03:12:13.619851 7387 generic.go:334] "Generic (PLEG): container finished" podID="3a9142af-1b48-49b1-8e0f-53e8494d5e01" containerID="4d94dea428b3bf85791a0b8f028285c48bd5213ac70429f60380a516057a75ed" exitCode=0 Mar 08 03:12:13.620112 master-0 kubenswrapper[7387]: I0308 03:12:13.620041 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwkmn" 
event={"ID":"3a9142af-1b48-49b1-8e0f-53e8494d5e01","Type":"ContainerDied","Data":"4d94dea428b3bf85791a0b8f028285c48bd5213ac70429f60380a516057a75ed"} Mar 08 03:12:13.625867 master-0 kubenswrapper[7387]: I0308 03:12:13.625801 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"e5123ed41e1d59409ecb7b6a093fbec053e4ad2aa3edc24c60e3ea460620ff6a"} Mar 08 03:12:13.739436 master-0 kubenswrapper[7387]: I0308 03:12:13.739324 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:12:14.633958 master-0 kubenswrapper[7387]: I0308 03:12:14.633879 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv2v9" event={"ID":"10895809-a444-42ec-a41f-111e17f6beb3","Type":"ContainerStarted","Data":"e758edc1c00f8607b75cd7d8c61fb0e8adc03a2c4a9c4602da27bdc698b41825"} Mar 08 03:12:14.637096 master-0 kubenswrapper[7387]: I0308 03:12:14.637040 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2dj4" event={"ID":"7afe61b3-1460-48ed-9369-4d9893d2f4f4","Type":"ContainerStarted","Data":"0151a35bf1531a9a80bb3a7b0b88eb0c4b3a925a22fdf316ade1341157c849b4"} Mar 08 03:12:14.638652 master-0 kubenswrapper[7387]: I0308 03:12:14.638624 7387 generic.go:334] "Generic (PLEG): container finished" podID="ed2e0194-6b50-4478-aba4-21193d2c18aa" containerID="d2e9db5795871d92c7d2a7895a4e9d84c621a83e058c0b33df388b4e6b8eebdb" exitCode=0 Mar 08 03:12:14.638806 master-0 kubenswrapper[7387]: I0308 03:12:14.638755 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"ed2e0194-6b50-4478-aba4-21193d2c18aa","Type":"ContainerDied","Data":"d2e9db5795871d92c7d2a7895a4e9d84c621a83e058c0b33df388b4e6b8eebdb"} Mar 08 03:12:14.642053 master-0 
kubenswrapper[7387]: I0308 03:12:14.641260 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwkmn" event={"ID":"3a9142af-1b48-49b1-8e0f-53e8494d5e01","Type":"ContainerStarted","Data":"7d1e117d0ec451a4b1cba8ab16163f6c71cff1fb505fc4820a69f5c053ccc5d7"} Mar 08 03:12:14.647192 master-0 kubenswrapper[7387]: I0308 03:12:14.647148 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljh97" event={"ID":"4df5a48e-425c-443e-bfdf-6d57fe1e4638","Type":"ContainerStarted","Data":"353c345c655000efd9e6c93a30cd6a8ddd92de7374194a0c17aaea9333dfdb04"} Mar 08 03:12:15.656802 master-0 kubenswrapper[7387]: I0308 03:12:15.656753 7387 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="f80accad2b75f0dbc8ca9ec1b9207f9c29402e934558ea0edecba0bf20e9769f" exitCode=1 Mar 08 03:12:15.657618 master-0 kubenswrapper[7387]: I0308 03:12:15.656803 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerDied","Data":"f80accad2b75f0dbc8ca9ec1b9207f9c29402e934558ea0edecba0bf20e9769f"} Mar 08 03:12:15.664880 master-0 kubenswrapper[7387]: I0308 03:12:15.664294 7387 scope.go:117] "RemoveContainer" containerID="f80accad2b75f0dbc8ca9ec1b9207f9c29402e934558ea0edecba0bf20e9769f" Mar 08 03:12:15.989985 master-0 kubenswrapper[7387]: I0308 03:12:15.987260 7387 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 08 03:12:16.106584 master-0 kubenswrapper[7387]: I0308 03:12:16.106457 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed2e0194-6b50-4478-aba4-21193d2c18aa-kubelet-dir\") pod \"ed2e0194-6b50-4478-aba4-21193d2c18aa\" (UID: \"ed2e0194-6b50-4478-aba4-21193d2c18aa\") " Mar 08 03:12:16.106816 master-0 kubenswrapper[7387]: I0308 03:12:16.106691 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed2e0194-6b50-4478-aba4-21193d2c18aa-kube-api-access\") pod \"ed2e0194-6b50-4478-aba4-21193d2c18aa\" (UID: \"ed2e0194-6b50-4478-aba4-21193d2c18aa\") " Mar 08 03:12:16.107231 master-0 kubenswrapper[7387]: I0308 03:12:16.107199 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ed2e0194-6b50-4478-aba4-21193d2c18aa-var-lock\") pod \"ed2e0194-6b50-4478-aba4-21193d2c18aa\" (UID: \"ed2e0194-6b50-4478-aba4-21193d2c18aa\") " Mar 08 03:12:16.107231 master-0 kubenswrapper[7387]: I0308 03:12:16.107207 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed2e0194-6b50-4478-aba4-21193d2c18aa-var-lock" (OuterVolumeSpecName: "var-lock") pod "ed2e0194-6b50-4478-aba4-21193d2c18aa" (UID: "ed2e0194-6b50-4478-aba4-21193d2c18aa"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:12:16.107496 master-0 kubenswrapper[7387]: I0308 03:12:16.107465 7387 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ed2e0194-6b50-4478-aba4-21193d2c18aa-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 03:12:16.107546 master-0 kubenswrapper[7387]: I0308 03:12:16.107510 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed2e0194-6b50-4478-aba4-21193d2c18aa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ed2e0194-6b50-4478-aba4-21193d2c18aa" (UID: "ed2e0194-6b50-4478-aba4-21193d2c18aa"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:12:16.109491 master-0 kubenswrapper[7387]: I0308 03:12:16.109462 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed2e0194-6b50-4478-aba4-21193d2c18aa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ed2e0194-6b50-4478-aba4-21193d2c18aa" (UID: "ed2e0194-6b50-4478-aba4-21193d2c18aa"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:12:16.208670 master-0 kubenswrapper[7387]: I0308 03:12:16.208538 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed2e0194-6b50-4478-aba4-21193d2c18aa-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 03:12:16.208670 master-0 kubenswrapper[7387]: I0308 03:12:16.208595 7387 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed2e0194-6b50-4478-aba4-21193d2c18aa-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 03:12:16.666104 master-0 kubenswrapper[7387]: I0308 03:12:16.666039 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"da5c0193c648331dfa0a6bd33ec4c599a059bf9e4842b26f52002f9bec9abbb4"} Mar 08 03:12:16.669141 master-0 kubenswrapper[7387]: I0308 03:12:16.669081 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"ed2e0194-6b50-4478-aba4-21193d2c18aa","Type":"ContainerDied","Data":"5228b99475d9080f8618d95d08696502b61174da99371fbe9bbbd7e3bda94150"} Mar 08 03:12:16.669275 master-0 kubenswrapper[7387]: I0308 03:12:16.669143 7387 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5228b99475d9080f8618d95d08696502b61174da99371fbe9bbbd7e3bda94150" Mar 08 03:12:16.669275 master-0 kubenswrapper[7387]: I0308 03:12:16.669230 7387 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Mar 08 03:12:16.740503 master-0 kubenswrapper[7387]: I0308 03:12:16.740390 7387 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:12:17.135017 master-0 kubenswrapper[7387]: E0308 03:12:17.134881 7387 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:12:17.679531 master-0 kubenswrapper[7387]: I0308 03:12:17.679432 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-h7lpf_0722d9c3-77b8-4770-9171-d4aeba4b0cc7/openshift-controller-manager-operator/0.log"
Mar 08 03:12:17.679531 master-0 kubenswrapper[7387]: I0308 03:12:17.679519 7387 generic.go:334] "Generic (PLEG): container finished" podID="0722d9c3-77b8-4770-9171-d4aeba4b0cc7" containerID="8ab87543a0dca707df87062a9fccbc3d1ab6ac26bb171ba825afd502c52f108c" exitCode=1
Mar 08 03:12:17.680611 master-0 kubenswrapper[7387]: I0308 03:12:17.679563 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf" event={"ID":"0722d9c3-77b8-4770-9171-d4aeba4b0cc7","Type":"ContainerDied","Data":"8ab87543a0dca707df87062a9fccbc3d1ab6ac26bb171ba825afd502c52f108c"}
Mar 08 03:12:17.680611 master-0 kubenswrapper[7387]: I0308 03:12:17.680147 7387 scope.go:117] "RemoveContainer" containerID="8ab87543a0dca707df87062a9fccbc3d1ab6ac26bb171ba825afd502c52f108c"
Mar 08 03:12:18.689416 master-0 kubenswrapper[7387]: I0308 03:12:18.689323 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-h7lpf_0722d9c3-77b8-4770-9171-d4aeba4b0cc7/openshift-controller-manager-operator/0.log"
Mar 08 03:12:18.690264 master-0 kubenswrapper[7387]: I0308 03:12:18.689438 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf" event={"ID":"0722d9c3-77b8-4770-9171-d4aeba4b0cc7","Type":"ContainerStarted","Data":"df227d89587fe4b6db1c506d3364812306abac68c1497c581534f430e3bbb731"}
Mar 08 03:12:18.900171 master-0 kubenswrapper[7387]: I0308 03:12:18.900053 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bv2v9"
Mar 08 03:12:18.900171 master-0 kubenswrapper[7387]: I0308 03:12:18.900135 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bv2v9"
Mar 08 03:12:18.969029 master-0 kubenswrapper[7387]: I0308 03:12:18.968844 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bv2v9"
Mar 08 03:12:19.110887 master-0 kubenswrapper[7387]: I0308 03:12:19.110797 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l2dj4"
Mar 08 03:12:19.110887 master-0 kubenswrapper[7387]: I0308 03:12:19.110881 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l2dj4"
Mar 08 03:12:19.181349 master-0 kubenswrapper[7387]: I0308 03:12:19.181282 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l2dj4"
Mar 08 03:12:19.772622 master-0 kubenswrapper[7387]: I0308 03:12:19.772561 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l2dj4"
Mar 08 03:12:19.773215 master-0 kubenswrapper[7387]: I0308 03:12:19.772654 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bv2v9"
Mar 08 03:12:20.648866 master-0 kubenswrapper[7387]: I0308 03:12:20.648781 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qwkmn"
Mar 08 03:12:20.649261 master-0 kubenswrapper[7387]: I0308 03:12:20.649199 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qwkmn"
Mar 08 03:12:20.683869 master-0 kubenswrapper[7387]: I0308 03:12:20.683807 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qwkmn"
Mar 08 03:12:20.764362 master-0 kubenswrapper[7387]: I0308 03:12:20.764306 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qwkmn"
Mar 08 03:12:21.701090 master-0 kubenswrapper[7387]: I0308 03:12:21.700946 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ljh97"
Mar 08 03:12:21.702119 master-0 kubenswrapper[7387]: I0308 03:12:21.702064 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ljh97"
Mar 08 03:12:22.759768 master-0 kubenswrapper[7387]: I0308 03:12:22.759682 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ljh97" podUID="4df5a48e-425c-443e-bfdf-6d57fe1e4638" containerName="registry-server" probeResult="failure" output=<
Mar 08 03:12:22.759768 master-0 kubenswrapper[7387]: timeout: failed to connect service ":50051" within 1s
Mar 08 03:12:22.759768 master-0 kubenswrapper[7387]: >
Mar 08 03:12:23.319667 master-0 kubenswrapper[7387]: I0308 03:12:23.319545 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:12:26.739583 master-0 kubenswrapper[7387]: I0308 03:12:26.739444 7387 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:12:26.845125 master-0 kubenswrapper[7387]: I0308 03:12:26.845013 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw"
Mar 08 03:12:27.136090 master-0 kubenswrapper[7387]: E0308 03:12:27.135944 7387 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:12:27.398424 master-0 kubenswrapper[7387]: E0308 03:12:27.398059 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:12:17Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:12:17Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:12:17Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:12:17Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914\\\"],\\\"sizeBytes\\\":458126424},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9\\\"],\\\"sizeBytes\\\":456575686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626\\\"],\\\"sizeBytes\\\":448828105},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\"],\\\"sizeBytes\\\":443271011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ceca1efee55b9fd5089428476bbc401fe73db7c0b0f5e16d4ad28ed0f0f9d43\\\"],\\\"sizeBytes\\\":438654375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ace4dcd008420277d915fe983b07bbb50fb3ab0673f28d0166424a75bc2137e7\\\"],\\\"sizeBytes\\\":411585608},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8f0fda36e9a2040dbe0537361dcd73658df4e669d846f8101a8f9f29f0be9a7\\\"],\\\"sizeBytes\\\":407347126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3\\\"],\\\"sizeBytes\\\":396521759}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:12:27.761383 master-0 kubenswrapper[7387]: I0308 03:12:27.761328 7387 generic.go:334] "Generic (PLEG): container finished" podID="354f29997baa583b6238f7de9108ee10" containerID="aa2691e43f5a9cad8fc8af5208972f2ff55b688600781969c01beedcd7050c10" exitCode=0
Mar 08 03:12:29.973612 master-0 kubenswrapper[7387]: I0308 03:12:29.973534 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_354f29997baa583b6238f7de9108ee10/etcdctl/0.log"
Mar 08 03:12:29.974270 master-0 kubenswrapper[7387]: I0308 03:12:29.973647 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 08 03:12:30.094251 master-0 kubenswrapper[7387]: I0308 03:12:30.094180 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"354f29997baa583b6238f7de9108ee10\" (UID: \"354f29997baa583b6238f7de9108ee10\") "
Mar 08 03:12:30.094470 master-0 kubenswrapper[7387]: I0308 03:12:30.094285 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"354f29997baa583b6238f7de9108ee10\" (UID: \"354f29997baa583b6238f7de9108ee10\") "
Mar 08 03:12:30.094470 master-0 kubenswrapper[7387]: I0308 03:12:30.094367 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir" (OuterVolumeSpecName: "data-dir") pod "354f29997baa583b6238f7de9108ee10" (UID: "354f29997baa583b6238f7de9108ee10"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:12:30.094617 master-0 kubenswrapper[7387]: I0308 03:12:30.094516 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs" (OuterVolumeSpecName: "certs") pod "354f29997baa583b6238f7de9108ee10" (UID: "354f29997baa583b6238f7de9108ee10"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:12:30.094690 master-0 kubenswrapper[7387]: I0308 03:12:30.094668 7387 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") on node \"master-0\" DevicePath \"\""
Mar 08 03:12:30.094755 master-0 kubenswrapper[7387]: I0308 03:12:30.094691 7387 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:12:30.781699 master-0 kubenswrapper[7387]: I0308 03:12:30.781624 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_354f29997baa583b6238f7de9108ee10/etcdctl/0.log"
Mar 08 03:12:30.782034 master-0 kubenswrapper[7387]: I0308 03:12:30.781705 7387 generic.go:334] "Generic (PLEG): container finished" podID="354f29997baa583b6238f7de9108ee10" containerID="e9d535fbda5084e4f8056f4fa3682180009f17ff7a894af1bf7d64a04c950d56" exitCode=137
Mar 08 03:12:30.782034 master-0 kubenswrapper[7387]: I0308 03:12:30.781771 7387 scope.go:117] "RemoveContainer" containerID="aa2691e43f5a9cad8fc8af5208972f2ff55b688600781969c01beedcd7050c10"
Mar 08 03:12:30.782034 master-0 kubenswrapper[7387]: I0308 03:12:30.781832 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 08 03:12:30.807561 master-0 kubenswrapper[7387]: I0308 03:12:30.803497 7387 scope.go:117] "RemoveContainer" containerID="e9d535fbda5084e4f8056f4fa3682180009f17ff7a894af1bf7d64a04c950d56"
Mar 08 03:12:30.824760 master-0 kubenswrapper[7387]: I0308 03:12:30.824703 7387 scope.go:117] "RemoveContainer" containerID="aa2691e43f5a9cad8fc8af5208972f2ff55b688600781969c01beedcd7050c10"
Mar 08 03:12:30.825441 master-0 kubenswrapper[7387]: E0308 03:12:30.825363 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa2691e43f5a9cad8fc8af5208972f2ff55b688600781969c01beedcd7050c10\": container with ID starting with aa2691e43f5a9cad8fc8af5208972f2ff55b688600781969c01beedcd7050c10 not found: ID does not exist" containerID="aa2691e43f5a9cad8fc8af5208972f2ff55b688600781969c01beedcd7050c10"
Mar 08 03:12:30.825441 master-0 kubenswrapper[7387]: I0308 03:12:30.825425 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa2691e43f5a9cad8fc8af5208972f2ff55b688600781969c01beedcd7050c10"} err="failed to get container status \"aa2691e43f5a9cad8fc8af5208972f2ff55b688600781969c01beedcd7050c10\": rpc error: code = NotFound desc = could not find container \"aa2691e43f5a9cad8fc8af5208972f2ff55b688600781969c01beedcd7050c10\": container with ID starting with aa2691e43f5a9cad8fc8af5208972f2ff55b688600781969c01beedcd7050c10 not found: ID does not exist"
Mar 08 03:12:30.825640 master-0 kubenswrapper[7387]: I0308 03:12:30.825462 7387 scope.go:117] "RemoveContainer" containerID="e9d535fbda5084e4f8056f4fa3682180009f17ff7a894af1bf7d64a04c950d56"
Mar 08 03:12:30.826043 master-0 kubenswrapper[7387]: E0308 03:12:30.825986 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9d535fbda5084e4f8056f4fa3682180009f17ff7a894af1bf7d64a04c950d56\": container with ID starting with e9d535fbda5084e4f8056f4fa3682180009f17ff7a894af1bf7d64a04c950d56 not found: ID does not exist" containerID="e9d535fbda5084e4f8056f4fa3682180009f17ff7a894af1bf7d64a04c950d56"
Mar 08 03:12:30.826126 master-0 kubenswrapper[7387]: I0308 03:12:30.826035 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9d535fbda5084e4f8056f4fa3682180009f17ff7a894af1bf7d64a04c950d56"} err="failed to get container status \"e9d535fbda5084e4f8056f4fa3682180009f17ff7a894af1bf7d64a04c950d56\": rpc error: code = NotFound desc = could not find container \"e9d535fbda5084e4f8056f4fa3682180009f17ff7a894af1bf7d64a04c950d56\": container with ID starting with e9d535fbda5084e4f8056f4fa3682180009f17ff7a894af1bf7d64a04c950d56 not found: ID does not exist"
Mar 08 03:12:31.449658 master-0 kubenswrapper[7387]: I0308 03:12:31.449542 7387 patch_prober.go:28] interesting pod/etcd-operator-5884b9cd56-dn4ll container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body=
Mar 08 03:12:31.449658 master-0 kubenswrapper[7387]: I0308 03:12:31.449635 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" podUID="c6e4afd0-fbcd-49c7-9132-b54c9c28b74b" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused"
Mar 08 03:12:31.771061 master-0 kubenswrapper[7387]: I0308 03:12:31.770980 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354f29997baa583b6238f7de9108ee10" path="/var/lib/kubelet/pods/354f29997baa583b6238f7de9108ee10/volumes"
Mar 08 03:12:31.771634 master-0 kubenswrapper[7387]: I0308 03:12:31.771583 7387 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 08 03:12:32.799554 master-0 kubenswrapper[7387]: I0308 03:12:32.799464 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_0a8d4b89-fd81-4418-9f72-c8447fad86ad/installer/0.log"
Mar 08 03:12:32.800325 master-0 kubenswrapper[7387]: I0308 03:12:32.799587 7387 generic.go:334] "Generic (PLEG): container finished" podID="0a8d4b89-fd81-4418-9f72-c8447fad86ad" containerID="0cb275b613648ba82dd895945a8f72c136f919a1708eb582688a065e13a9ce66" exitCode=1
Mar 08 03:12:33.871424 master-0 kubenswrapper[7387]: E0308 03:12:33.871319 7387 kubelet.go:1929] "Failed creating a mirror pod for" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 08 03:12:33.872423 master-0 kubenswrapper[7387]: I0308 03:12:33.872057 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 08 03:12:33.896110 master-0 kubenswrapper[7387]: W0308 03:12:33.896034 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e52bef89f4b50e4590a1719bcc5d7e5.slice/crio-72f6f9882a20c168411a03a57057317c3c794c47896b968c0ad881097d93c726 WatchSource:0}: Error finding container 72f6f9882a20c168411a03a57057317c3c794c47896b968c0ad881097d93c726: Status 404 returned error can't find the container with id 72f6f9882a20c168411a03a57057317c3c794c47896b968c0ad881097d93c726
Mar 08 03:12:34.004859 master-0 kubenswrapper[7387]: E0308 03:12:34.004607 7387 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.189abf199102aa88 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:11:59.851670152 +0000 UTC m=+56.246145833,LastTimestamp:2026-03-08 03:11:59.851670152 +0000 UTC m=+56.246145833,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 08 03:12:34.007204 master-0 kubenswrapper[7387]: I0308 03:12:34.007162 7387 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-k8xgg container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body=
Mar 08 03:12:34.007389 master-0 kubenswrapper[7387]: I0308 03:12:34.007352 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" podUID="90ef7c0a-7c6f-45aa-865d-1e247110b265" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused"
Mar 08 03:12:34.827684 master-0 kubenswrapper[7387]: I0308 03:12:34.827573 7387 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="82f2a3373607d1947b3011dec302f94a1adb04d9790715e6842c60965770b27f" exitCode=0
Mar 08 03:12:36.739616 master-0 kubenswrapper[7387]: I0308 03:12:36.739488 7387 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:12:37.136395 master-0 kubenswrapper[7387]: E0308 03:12:37.136309 7387 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded"
Mar 08 03:12:37.399342 master-0 kubenswrapper[7387]: E0308 03:12:37.399155 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:12:42.888555 master-0 kubenswrapper[7387]: I0308 03:12:42.888455 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-wxrfp_89fc77c9-b444-4828-8a35-c63ea9335245/network-operator/0.log"
Mar 08 03:12:42.889459 master-0 kubenswrapper[7387]: I0308 03:12:42.888580 7387 generic.go:334] "Generic (PLEG): container finished" podID="89fc77c9-b444-4828-8a35-c63ea9335245" containerID="5ea4d742313470919626ed619f63545042ece5a1573517854bb097c5ce7c3645" exitCode=255
Mar 08 03:12:43.906332 master-0 kubenswrapper[7387]: I0308 03:12:43.906239 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_0a2e5993-e0cb-4c63-9dda-abbb60bfe42b/installer/0.log"
Mar 08 03:12:43.907239 master-0 kubenswrapper[7387]: I0308 03:12:43.906346 7387 generic.go:334] "Generic (PLEG): container finished" podID="0a2e5993-e0cb-4c63-9dda-abbb60bfe42b" containerID="2569a7eccce46264a4c7e0024d1b136ccb829cb434ec57e4613d364f065d0db9" exitCode=1
Mar 08 03:12:44.007306 master-0 kubenswrapper[7387]: I0308 03:12:44.007225 7387 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-k8xgg container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body=
Mar 08 03:12:44.007306 master-0 kubenswrapper[7387]: I0308 03:12:44.007290 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" podUID="90ef7c0a-7c6f-45aa-865d-1e247110b265" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused"
Mar 08 03:12:44.915107 master-0 kubenswrapper[7387]: I0308 03:12:44.914974 7387 generic.go:334] "Generic (PLEG): container finished" podID="5a058138-8039-4841-821b-7ee5bb8648e4" containerID="0ece4a43051b1635cbb843e7e2b46319cb5de6a10e2de8626c1fb83227bc0d72" exitCode=0
Mar 08 03:12:47.136952 master-0 kubenswrapper[7387]: E0308 03:12:47.136839 7387 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:12:47.400067 master-0 kubenswrapper[7387]: E0308 03:12:47.399867 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:12:49.945824 master-0 kubenswrapper[7387]: I0308 03:12:49.945749 7387 generic.go:334] "Generic (PLEG): container finished" podID="c6e4afd0-fbcd-49c7-9132-b54c9c28b74b" containerID="0757df7640b506a1487a772cc33679d0f06cf17369689e5bb19ed682a933347c" exitCode=0
Mar 08 03:12:54.007330 master-0 kubenswrapper[7387]: I0308 03:12:54.007265 7387 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-k8xgg container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body=
Mar 08 03:12:54.007837 master-0 kubenswrapper[7387]: I0308 03:12:54.007341 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" podUID="90ef7c0a-7c6f-45aa-865d-1e247110b265" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused"
Mar 08 03:12:55.985673 master-0 kubenswrapper[7387]: I0308 03:12:55.985552 7387 generic.go:334] "Generic (PLEG): container finished" podID="1fa64f1b-9f10-488b-8f94-1600774062c4" containerID="97e7e8e1d4c76162fdd36f707ca3e2faaa5f8b65907e58ff8edb116f08fe408b" exitCode=0
Mar 08 03:12:57.138310 master-0 kubenswrapper[7387]: E0308 03:12:57.138160 7387 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:12:57.138310 master-0 kubenswrapper[7387]: I0308 03:12:57.138265 7387 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 08 03:12:57.400852 master-0 kubenswrapper[7387]: E0308 03:12:57.400683 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:13:00.013045 master-0 kubenswrapper[7387]: I0308 03:13:00.012982 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-ppdzb_4fd323ae-11bf-4207-bdce-4d51a9c19dc3/approver/0.log"
Mar 08 03:13:00.013955 master-0 kubenswrapper[7387]: I0308 03:13:00.013558 7387 generic.go:334] "Generic (PLEG): container finished" podID="4fd323ae-11bf-4207-bdce-4d51a9c19dc3" containerID="c5eec4110852b5b6f65ead45beeb23e454a4f0a36ca8d676067c0e98d6a8439c" exitCode=1
Mar 08 03:13:00.016085 master-0 kubenswrapper[7387]: I0308 03:13:00.016026 7387 generic.go:334] "Generic (PLEG): container finished" podID="1d446527-f3fd-4a37-a980-7445031928d1" containerID="14837a65d7b37118db204275e04a4816d1b952e719453adc75bef1d793ecb182" exitCode=0
Mar 08 03:13:01.024577 master-0 kubenswrapper[7387]: I0308 03:13:01.024493 7387 generic.go:334] "Generic (PLEG): container finished" podID="90ef7c0a-7c6f-45aa-865d-1e247110b265" containerID="107e7aadbde6b65c42eb4756264c5507aea9b4627e7947de6f6b874799048d52" exitCode=0
Mar 08 03:13:05.052391 master-0 kubenswrapper[7387]: I0308 03:13:05.052227 7387 generic.go:334] "Generic (PLEG): container finished" podID="89e15db4-c541-4d53-878d-706fa022f970" containerID="6dd339bdc8fcd78151718754ca62bc4f1f1c78bc15d5ff2223af1f5068c80ca0" exitCode=0
Mar 08 03:13:05.775525 master-0 kubenswrapper[7387]: E0308 03:13:05.775366 7387 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0"
Mar 08 03:13:05.775853 master-0 kubenswrapper[7387]: E0308 03:13:05.775712 7387 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.017s"
Mar 08 03:13:05.775853 master-0 kubenswrapper[7387]: I0308 03:13:05.775748 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg"
Mar 08 03:13:05.776101 master-0 kubenswrapper[7387]: I0308 03:13:05.775928 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ljh97"
Mar 08 03:13:05.776606 master-0 kubenswrapper[7387]: I0308 03:13:05.776546 7387 scope.go:117] "RemoveContainer" containerID="107e7aadbde6b65c42eb4756264c5507aea9b4627e7947de6f6b874799048d52"
Mar 08 03:13:05.789848 master-0 kubenswrapper[7387]: I0308 03:13:05.789767 7387 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 08 03:13:06.060599 master-0 kubenswrapper[7387]: I0308 03:13:06.060436 7387 generic.go:334] "Generic (PLEG): container finished" podID="2a506cf6-bc39-4089-9caa-4c14c4d15c11" containerID="886c2fa8f6da76d9fb13730fe0f87e8425e3e99bb43a5c2162f25e3c2baef044" exitCode=0
Mar 08 03:13:06.064964 master-0 kubenswrapper[7387]: I0308 03:13:06.064874 7387 generic.go:334] "Generic (PLEG): container finished" podID="2468d2a3-ec65-4888-a86a-3f66fa311f56" containerID="5344ff16fad08b9c5182bd478544ec5dffdc1907ce0e2bdd88e67cb83807f31b" exitCode=0
Mar 08 03:13:07.139399 master-0 kubenswrapper[7387]: E0308 03:13:07.139262 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
Mar 08 03:13:07.402171 master-0 kubenswrapper[7387]: E0308 03:13:07.401986 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes master-0)"
Mar 08 03:13:07.402171 master-0 kubenswrapper[7387]: E0308 03:13:07.402059 7387 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 08 03:13:08.008267 master-0 kubenswrapper[7387]: E0308 03:13:08.008095 7387 event.go:359] "Server
rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{certified-operators-l2dj4.189abf1c8368e617 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-l2dj4,UID:7afe61b3-1460-48ed-9369-4d9893d2f4f4,APIVersion:v1,ResourceVersion:7533,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/certified-operator-index:v4.18\" in 22.092s (22.092s including waiting). Image size: 1272201949 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:12:12.508390935 +0000 UTC m=+68.902866626,LastTimestamp:2026-03-08 03:12:12.508390935 +0000 UTC m=+68.902866626,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:13:10.081217 master-0 kubenswrapper[7387]: I0308 03:13:10.081150 7387 status_manager.go:851] "Failed to get status for pod" podUID="d9732f3d-49d0-4400-ab54-ce029c49ec37" pod="openshift-kube-scheduler/installer-3-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-3-master-0)" Mar 08 03:13:14.117408 master-0 kubenswrapper[7387]: I0308 03:13:14.117318 7387 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="e5123ed41e1d59409ecb7b6a093fbec053e4ad2aa3edc24c60e3ea460620ff6a" exitCode=1 Mar 08 03:13:17.340620 master-0 kubenswrapper[7387]: E0308 03:13:17.340420 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="400ms" Mar 08 
03:13:27.543378 master-0 kubenswrapper[7387]: E0308 03:13:27.543119 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:13:17Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:13:17Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:13:17Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:13:17Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ae042a5d32eb2f18d537f2068849e665b55df7d8360daedaaeea98bd2a79e769\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d077bbabe6cb885ed229119008480493e8364e4bfddaa00b099f68c52b016e6b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1733328350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:063b8972231e65eb43f6545ba37804f68138dc54d97b91a652a1c5bc7dc76aa5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cf682d23b2857e455609879a0867d171a221c18e2cec995dd79570b77c5a4705\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1272201949},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e0c034ae18daa01af8d073f8cc24ae4af87883c664304910eab1167fdfd60c0b\\\",\\\"re
gistry.redhat.io/redhat/redhat-marketplace-index@sha256:ef0c6b9e405f7a452211e063ce07ded04ccbe38b53860bfd71b5a7cd5072830a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1229556414},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:79984dfbdf9aeae3985c7fd7515e12328775c0e7fc4782929d0998f4dd2a87c6\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:7be89499615ec913d0fe40ca89682080a3f1181a066dbc501c877cc7ccbcc9ae\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1220167376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157
a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914\\\"],\\\"sizeBytes\\\":458126424},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9\\\"],\\\"sizeBytes\\\":456575686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626\\\"],\\\"sizeBytes\\\":448828105},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],
\\\"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\"],\\\"sizeBytes\\\":443271011}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:13:27.741554 master-0 kubenswrapper[7387]: E0308 03:13:27.741427 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 08 03:13:31.449724 master-0 kubenswrapper[7387]: I0308 03:13:31.449621 7387 patch_prober.go:28] interesting pod/etcd-operator-5884b9cd56-dn4ll container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body= Mar 08 03:13:31.450650 master-0 kubenswrapper[7387]: I0308 03:13:31.449723 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" podUID="c6e4afd0-fbcd-49c7-9132-b54c9c28b74b" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" Mar 08 03:13:37.544863 master-0 kubenswrapper[7387]: E0308 03:13:37.544796 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:13:38.542483 master-0 kubenswrapper[7387]: E0308 03:13:38.542369 7387 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 08 03:13:39.792677 master-0 kubenswrapper[7387]: E0308 03:13:39.792600 7387 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 08 03:13:39.793485 master-0 kubenswrapper[7387]: E0308 03:13:39.792832 7387 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.017s" Mar 08 03:13:39.793485 master-0 kubenswrapper[7387]: I0308 03:13:39.792957 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ljh97" Mar 08 03:13:39.793485 master-0 kubenswrapper[7387]: I0308 03:13:39.793004 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:13:39.793485 master-0 kubenswrapper[7387]: I0308 03:13:39.793234 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"0a8d4b89-fd81-4418-9f72-c8447fad86ad","Type":"ContainerDied","Data":"0cb275b613648ba82dd895945a8f72c136f919a1708eb582688a065e13a9ce66"} Mar 08 03:13:39.793485 master-0 kubenswrapper[7387]: I0308 03:13:39.793278 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"82f2a3373607d1947b3011dec302f94a1adb04d9790715e6842c60965770b27f"} Mar 08 03:13:39.796176 master-0 kubenswrapper[7387]: I0308 03:13:39.796126 7387 scope.go:117] "RemoveContainer" containerID="14837a65d7b37118db204275e04a4816d1b952e719453adc75bef1d793ecb182" Mar 08 03:13:39.800454 
master-0 kubenswrapper[7387]: I0308 03:13:39.800402 7387 scope.go:117] "RemoveContainer" containerID="97e7e8e1d4c76162fdd36f707ca3e2faaa5f8b65907e58ff8edb116f08fe408b" Mar 08 03:13:39.807096 master-0 kubenswrapper[7387]: I0308 03:13:39.807051 7387 scope.go:117] "RemoveContainer" containerID="0ece4a43051b1635cbb843e7e2b46319cb5de6a10e2de8626c1fb83227bc0d72" Mar 08 03:13:39.810819 master-0 kubenswrapper[7387]: I0308 03:13:39.810761 7387 scope.go:117] "RemoveContainer" containerID="e5123ed41e1d59409ecb7b6a093fbec053e4ad2aa3edc24c60e3ea460620ff6a" Mar 08 03:13:39.811371 master-0 kubenswrapper[7387]: I0308 03:13:39.811317 7387 scope.go:117] "RemoveContainer" containerID="5ea4d742313470919626ed619f63545042ece5a1573517854bb097c5ce7c3645" Mar 08 03:13:39.811823 master-0 kubenswrapper[7387]: I0308 03:13:39.811772 7387 scope.go:117] "RemoveContainer" containerID="c5eec4110852b5b6f65ead45beeb23e454a4f0a36ca8d676067c0e98d6a8439c" Mar 08 03:13:39.812071 master-0 kubenswrapper[7387]: I0308 03:13:39.811880 7387 scope.go:117] "RemoveContainer" containerID="886c2fa8f6da76d9fb13730fe0f87e8425e3e99bb43a5c2162f25e3c2baef044" Mar 08 03:13:39.816123 master-0 kubenswrapper[7387]: I0308 03:13:39.816074 7387 scope.go:117] "RemoveContainer" containerID="0757df7640b506a1487a772cc33679d0f06cf17369689e5bb19ed682a933347c" Mar 08 03:13:39.817861 master-0 kubenswrapper[7387]: I0308 03:13:39.817812 7387 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 08 03:13:40.288654 master-0 kubenswrapper[7387]: I0308 03:13:40.288615 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-ppdzb_4fd323ae-11bf-4207-bdce-4d51a9c19dc3/approver/0.log" Mar 08 03:13:40.290561 master-0 kubenswrapper[7387]: I0308 03:13:40.290532 7387 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-wxrfp_89fc77c9-b444-4828-8a35-c63ea9335245/network-operator/0.log" Mar 08 03:13:40.716028 master-0 kubenswrapper[7387]: I0308 03:13:40.715875 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_0a8d4b89-fd81-4418-9f72-c8447fad86ad/installer/0.log" Mar 08 03:13:40.716028 master-0 kubenswrapper[7387]: I0308 03:13:40.715971 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 08 03:13:40.722977 master-0 kubenswrapper[7387]: I0308 03:13:40.722943 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_0a2e5993-e0cb-4c63-9dda-abbb60bfe42b/installer/0.log" Mar 08 03:13:40.723094 master-0 kubenswrapper[7387]: I0308 03:13:40.722993 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 08 03:13:40.910480 master-0 kubenswrapper[7387]: I0308 03:13:40.910410 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0a8d4b89-fd81-4418-9f72-c8447fad86ad-var-lock\") pod \"0a8d4b89-fd81-4418-9f72-c8447fad86ad\" (UID: \"0a8d4b89-fd81-4418-9f72-c8447fad86ad\") " Mar 08 03:13:40.910480 master-0 kubenswrapper[7387]: I0308 03:13:40.910491 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a2e5993-e0cb-4c63-9dda-abbb60bfe42b-kube-api-access\") pod \"0a2e5993-e0cb-4c63-9dda-abbb60bfe42b\" (UID: \"0a2e5993-e0cb-4c63-9dda-abbb60bfe42b\") " Mar 08 03:13:40.911336 master-0 kubenswrapper[7387]: I0308 03:13:40.910538 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/0a2e5993-e0cb-4c63-9dda-abbb60bfe42b-var-lock\") pod \"0a2e5993-e0cb-4c63-9dda-abbb60bfe42b\" (UID: \"0a2e5993-e0cb-4c63-9dda-abbb60bfe42b\") " Mar 08 03:13:40.911336 master-0 kubenswrapper[7387]: I0308 03:13:40.910626 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a2e5993-e0cb-4c63-9dda-abbb60bfe42b-kubelet-dir\") pod \"0a2e5993-e0cb-4c63-9dda-abbb60bfe42b\" (UID: \"0a2e5993-e0cb-4c63-9dda-abbb60bfe42b\") " Mar 08 03:13:40.911336 master-0 kubenswrapper[7387]: I0308 03:13:40.910648 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a8d4b89-fd81-4418-9f72-c8447fad86ad-kubelet-dir\") pod \"0a8d4b89-fd81-4418-9f72-c8447fad86ad\" (UID: \"0a8d4b89-fd81-4418-9f72-c8447fad86ad\") " Mar 08 03:13:40.911336 master-0 kubenswrapper[7387]: I0308 03:13:40.910676 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a8d4b89-fd81-4418-9f72-c8447fad86ad-kube-api-access\") pod \"0a8d4b89-fd81-4418-9f72-c8447fad86ad\" (UID: \"0a8d4b89-fd81-4418-9f72-c8447fad86ad\") " Mar 08 03:13:40.911336 master-0 kubenswrapper[7387]: I0308 03:13:40.911189 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a2e5993-e0cb-4c63-9dda-abbb60bfe42b-var-lock" (OuterVolumeSpecName: "var-lock") pod "0a2e5993-e0cb-4c63-9dda-abbb60bfe42b" (UID: "0a2e5993-e0cb-4c63-9dda-abbb60bfe42b"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:13:40.911336 master-0 kubenswrapper[7387]: I0308 03:13:40.911240 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a8d4b89-fd81-4418-9f72-c8447fad86ad-var-lock" (OuterVolumeSpecName: "var-lock") pod "0a8d4b89-fd81-4418-9f72-c8447fad86ad" (UID: "0a8d4b89-fd81-4418-9f72-c8447fad86ad"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:13:40.911336 master-0 kubenswrapper[7387]: I0308 03:13:40.911272 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a8d4b89-fd81-4418-9f72-c8447fad86ad-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0a8d4b89-fd81-4418-9f72-c8447fad86ad" (UID: "0a8d4b89-fd81-4418-9f72-c8447fad86ad"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:13:40.911336 master-0 kubenswrapper[7387]: I0308 03:13:40.911312 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a2e5993-e0cb-4c63-9dda-abbb60bfe42b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0a2e5993-e0cb-4c63-9dda-abbb60bfe42b" (UID: "0a2e5993-e0cb-4c63-9dda-abbb60bfe42b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:13:40.914362 master-0 kubenswrapper[7387]: I0308 03:13:40.914317 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a2e5993-e0cb-4c63-9dda-abbb60bfe42b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0a2e5993-e0cb-4c63-9dda-abbb60bfe42b" (UID: "0a2e5993-e0cb-4c63-9dda-abbb60bfe42b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:13:40.915553 master-0 kubenswrapper[7387]: I0308 03:13:40.915507 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a8d4b89-fd81-4418-9f72-c8447fad86ad-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0a8d4b89-fd81-4418-9f72-c8447fad86ad" (UID: "0a8d4b89-fd81-4418-9f72-c8447fad86ad"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:13:41.011662 master-0 kubenswrapper[7387]: I0308 03:13:41.011572 7387 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0a2e5993-e0cb-4c63-9dda-abbb60bfe42b-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 03:13:41.011662 master-0 kubenswrapper[7387]: I0308 03:13:41.011625 7387 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a2e5993-e0cb-4c63-9dda-abbb60bfe42b-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 03:13:41.011662 master-0 kubenswrapper[7387]: I0308 03:13:41.011639 7387 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a8d4b89-fd81-4418-9f72-c8447fad86ad-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 03:13:41.011662 master-0 kubenswrapper[7387]: I0308 03:13:41.011651 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a8d4b89-fd81-4418-9f72-c8447fad86ad-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 03:13:41.011662 master-0 kubenswrapper[7387]: I0308 03:13:41.011663 7387 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0a8d4b89-fd81-4418-9f72-c8447fad86ad-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 03:13:41.011662 master-0 kubenswrapper[7387]: I0308 03:13:41.011674 7387 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a2e5993-e0cb-4c63-9dda-abbb60bfe42b-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 03:13:41.302986 master-0 kubenswrapper[7387]: I0308 03:13:41.302762 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_0a2e5993-e0cb-4c63-9dda-abbb60bfe42b/installer/0.log" Mar 08 03:13:41.302986 master-0 kubenswrapper[7387]: I0308 03:13:41.302897 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 08 03:13:41.305120 master-0 kubenswrapper[7387]: I0308 03:13:41.305039 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_0a8d4b89-fd81-4418-9f72-c8447fad86ad/installer/0.log" Mar 08 03:13:41.305387 master-0 kubenswrapper[7387]: I0308 03:13:41.305173 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 08 03:13:42.012404 master-0 kubenswrapper[7387]: E0308 03:13:42.012190 7387 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{redhat-operators-ljh97.189abf1c85ab8abd openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-ljh97,UID:4df5a48e-425c-443e-bfdf-6d57fe1e4638,APIVersion:v1,ResourceVersion:7704,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/redhat-operator-index:v4.18\" in 20.102s (20.102s including waiting). 
Image size: 1733328350 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:12:12.546312893 +0000 UTC m=+68.940788584,LastTimestamp:2026-03-08 03:12:12.546312893 +0000 UTC m=+68.940788584,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:13:46.341727 master-0 kubenswrapper[7387]: I0308 03:13:46.341667 7387 generic.go:334] "Generic (PLEG): container finished" podID="7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6" containerID="207b42b97b0cc7b2a3b3fe717f857e83a1274408fc29faf61812a15be3fc5f86" exitCode=0 Mar 08 03:13:46.841136 master-0 kubenswrapper[7387]: I0308 03:13:46.840476 7387 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-4pgcf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" start-of-body= Mar 08 03:13:46.841136 master-0 kubenswrapper[7387]: I0308 03:13:46.840573 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" podUID="7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" Mar 08 03:13:46.841136 master-0 kubenswrapper[7387]: I0308 03:13:46.840636 7387 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-4pgcf container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" start-of-body= Mar 08 03:13:46.841136 master-0 kubenswrapper[7387]: I0308 03:13:46.840719 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" 
podUID="7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" Mar 08 03:13:47.546678 master-0 kubenswrapper[7387]: E0308 03:13:47.546573 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:13:49.362484 master-0 kubenswrapper[7387]: I0308 03:13:49.362409 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-h7lpf_0722d9c3-77b8-4770-9171-d4aeba4b0cc7/openshift-controller-manager-operator/1.log" Mar 08 03:13:49.363583 master-0 kubenswrapper[7387]: I0308 03:13:49.363532 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-h7lpf_0722d9c3-77b8-4770-9171-d4aeba4b0cc7/openshift-controller-manager-operator/0.log" Mar 08 03:13:49.363643 master-0 kubenswrapper[7387]: I0308 03:13:49.363592 7387 generic.go:334] "Generic (PLEG): container finished" podID="0722d9c3-77b8-4770-9171-d4aeba4b0cc7" containerID="df227d89587fe4b6db1c506d3364812306abac68c1497c581534f430e3bbb731" exitCode=255 Mar 08 03:13:50.143032 master-0 kubenswrapper[7387]: E0308 03:13:50.142925 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Mar 08 03:13:50.371638 master-0 kubenswrapper[7387]: I0308 03:13:50.371544 7387 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-rjwdp_7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b/manager/0.log" Mar 08 03:13:50.372758 master-0 kubenswrapper[7387]: I0308 03:13:50.372040 7387 generic.go:334] "Generic (PLEG): container finished" podID="7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b" containerID="847ec71b717fbc403d7670e2fb6fcb0eb16c5961bfffd67ba80ebb137144703d" exitCode=1 Mar 08 03:13:50.373773 master-0 kubenswrapper[7387]: I0308 03:13:50.373731 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/0.log" Mar 08 03:13:50.373929 master-0 kubenswrapper[7387]: I0308 03:13:50.373770 7387 generic.go:334] "Generic (PLEG): container finished" podID="9fb588a9-6240-4513-8e4b-248eb43d3f06" containerID="628092a132c01e680651df338ffa2bf9cd4d31a076b8f10e995aa7b3b2bc5d22" exitCode=1 Mar 08 03:13:50.376557 master-0 kubenswrapper[7387]: I0308 03:13:50.376503 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-c74s2_399c5025-da66-4c52-8e68-ea6c996d9cc8/manager/0.log" Mar 08 03:13:50.376685 master-0 kubenswrapper[7387]: I0308 03:13:50.376567 7387 generic.go:334] "Generic (PLEG): container finished" podID="399c5025-da66-4c52-8e68-ea6c996d9cc8" containerID="a8f3f14f501b72ff362550257f13a332eecf70ec4f446aeb3d199baf5fd9fcca" exitCode=1 Mar 08 03:13:52.807613 master-0 kubenswrapper[7387]: E0308 03:13:52.807506 7387 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 08 03:13:53.401263 master-0 kubenswrapper[7387]: I0308 03:13:53.401051 7387 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" 
containerID="ccac585e78a02166fe1d8053ba7b0fc4bb461435c208a87a1aaeb9d9552f95ee" exitCode=0 Mar 08 03:13:55.778400 master-0 kubenswrapper[7387]: I0308 03:13:55.778298 7387 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-rjwdp container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.42:8081/healthz\": dial tcp 10.128.0.42:8081: connect: connection refused" start-of-body= Mar 08 03:13:55.779517 master-0 kubenswrapper[7387]: I0308 03:13:55.778404 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" podUID="7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/healthz\": dial tcp 10.128.0.42:8081: connect: connection refused" Mar 08 03:13:55.779517 master-0 kubenswrapper[7387]: I0308 03:13:55.778563 7387 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-rjwdp container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" start-of-body= Mar 08 03:13:55.779517 master-0 kubenswrapper[7387]: I0308 03:13:55.778605 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" podUID="7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" Mar 08 03:13:55.788581 master-0 kubenswrapper[7387]: I0308 03:13:55.788490 7387 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-c74s2 container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.41:8081/healthz\": dial tcp 10.128.0.41:8081: connect: connection refused" start-of-body= Mar 08 
03:13:55.788807 master-0 kubenswrapper[7387]: I0308 03:13:55.788590 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" podUID="399c5025-da66-4c52-8e68-ea6c996d9cc8" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.41:8081/healthz\": dial tcp 10.128.0.41:8081: connect: connection refused" Mar 08 03:13:55.788807 master-0 kubenswrapper[7387]: I0308 03:13:55.788640 7387 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-c74s2 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused" start-of-body= Mar 08 03:13:55.788807 master-0 kubenswrapper[7387]: I0308 03:13:55.788751 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" podUID="399c5025-da66-4c52-8e68-ea6c996d9cc8" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused" Mar 08 03:13:56.843247 master-0 kubenswrapper[7387]: I0308 03:13:56.841632 7387 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-4pgcf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" start-of-body= Mar 08 03:13:56.843247 master-0 kubenswrapper[7387]: I0308 03:13:56.841706 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" podUID="7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" Mar 08 
03:13:56.843247 master-0 kubenswrapper[7387]: I0308 03:13:56.841803 7387 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-4pgcf container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" start-of-body= Mar 08 03:13:56.843247 master-0 kubenswrapper[7387]: I0308 03:13:56.841829 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" podUID="7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" Mar 08 03:13:57.547127 master-0 kubenswrapper[7387]: E0308 03:13:57.546884 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:13:59.443611 master-0 kubenswrapper[7387]: I0308 03:13:59.443526 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4bpl8_197afe92-5912-4e90-a477-e3abe001bbc7/ingress-operator/0.log" Mar 08 03:13:59.443611 master-0 kubenswrapper[7387]: I0308 03:13:59.443599 7387 generic.go:334] "Generic (PLEG): container finished" podID="197afe92-5912-4e90-a477-e3abe001bbc7" containerID="11de5739554b7c94cfe0fa61f3b1195f2e9f62f484bc837ca53fa9727626c6dd" exitCode=1 Mar 08 03:14:03.344706 master-0 kubenswrapper[7387]: E0308 03:14:03.344640 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" 
interval="6.4s" Mar 08 03:14:05.778332 master-0 kubenswrapper[7387]: I0308 03:14:05.778220 7387 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-rjwdp container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" start-of-body= Mar 08 03:14:05.778332 master-0 kubenswrapper[7387]: I0308 03:14:05.778312 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" podUID="7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" Mar 08 03:14:05.787237 master-0 kubenswrapper[7387]: I0308 03:14:05.787170 7387 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-c74s2 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused" start-of-body= Mar 08 03:14:05.787237 master-0 kubenswrapper[7387]: I0308 03:14:05.787222 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" podUID="399c5025-da66-4c52-8e68-ea6c996d9cc8" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused" Mar 08 03:14:06.840470 master-0 kubenswrapper[7387]: I0308 03:14:06.840366 7387 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-4pgcf container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" start-of-body= Mar 08 03:14:06.840470 master-0 kubenswrapper[7387]: 
I0308 03:14:06.840454 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" podUID="7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" Mar 08 03:14:06.841723 master-0 kubenswrapper[7387]: I0308 03:14:06.840462 7387 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-4pgcf container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" start-of-body= Mar 08 03:14:06.841723 master-0 kubenswrapper[7387]: I0308 03:14:06.840551 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" podUID="7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.16:8080/healthz\": dial tcp 10.128.0.16:8080: connect: connection refused" Mar 08 03:14:07.548022 master-0 kubenswrapper[7387]: E0308 03:14:07.547870 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:14:07.548022 master-0 kubenswrapper[7387]: E0308 03:14:07.548007 7387 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 08 03:14:10.082367 master-0 kubenswrapper[7387]: I0308 03:14:10.082270 7387 status_manager.go:851] "Failed to get status for pod" podUID="90ef7c0a-7c6f-45aa-865d-1e247110b265" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" err="the server was unable to return a response in the time allotted, but may 
still be processing the request (get pods authentication-operator-7c6989d6c4-k8xgg)" Mar 08 03:14:13.822697 master-0 kubenswrapper[7387]: E0308 03:14:13.822599 7387 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 08 03:14:13.823550 master-0 kubenswrapper[7387]: E0308 03:14:13.822812 7387 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.029s" Mar 08 03:14:13.823550 master-0 kubenswrapper[7387]: I0308 03:14:13.822845 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"72f6f9882a20c168411a03a57057317c3c794c47896b968c0ad881097d93c726"} Mar 08 03:14:13.823550 master-0 kubenswrapper[7387]: I0308 03:14:13.822889 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" event={"ID":"89fc77c9-b444-4828-8a35-c63ea9335245","Type":"ContainerDied","Data":"5ea4d742313470919626ed619f63545042ece5a1573517854bb097c5ce7c3645"} Mar 08 03:14:13.823550 master-0 kubenswrapper[7387]: I0308 03:14:13.822948 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"0a2e5993-e0cb-4c63-9dda-abbb60bfe42b","Type":"ContainerDied","Data":"2569a7eccce46264a4c7e0024d1b136ccb829cb434ec57e4613d364f065d0db9"} Mar 08 03:14:13.823550 master-0 kubenswrapper[7387]: I0308 03:14:13.823175 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" Mar 08 03:14:13.823550 master-0 kubenswrapper[7387]: I0308 03:14:13.823273 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:14:13.823550 
master-0 kubenswrapper[7387]: I0308 03:14:13.823284 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" Mar 08 03:14:13.823550 master-0 kubenswrapper[7387]: I0308 03:14:13.823296 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w" event={"ID":"5a058138-8039-4841-821b-7ee5bb8648e4","Type":"ContainerDied","Data":"0ece4a43051b1635cbb843e7e2b46319cb5de6a10e2de8626c1fb83227bc0d72"} Mar 08 03:14:13.825232 master-0 kubenswrapper[7387]: I0308 03:14:13.825158 7387 scope.go:117] "RemoveContainer" containerID="207b42b97b0cc7b2a3b3fe717f857e83a1274408fc29faf61812a15be3fc5f86" Mar 08 03:14:13.825384 master-0 kubenswrapper[7387]: I0308 03:14:13.825258 7387 scope.go:117] "RemoveContainer" containerID="628092a132c01e680651df338ffa2bf9cd4d31a076b8f10e995aa7b3b2bc5d22" Mar 08 03:14:13.828268 master-0 kubenswrapper[7387]: I0308 03:14:13.828222 7387 scope.go:117] "RemoveContainer" containerID="6dd339bdc8fcd78151718754ca62bc4f1f1c78bc15d5ff2223af1f5068c80ca0" Mar 08 03:14:13.828676 master-0 kubenswrapper[7387]: I0308 03:14:13.828620 7387 scope.go:117] "RemoveContainer" containerID="5344ff16fad08b9c5182bd478544ec5dffdc1907ce0e2bdd88e67cb83807f31b" Mar 08 03:14:13.834574 master-0 kubenswrapper[7387]: I0308 03:14:13.834518 7387 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 08 03:14:14.562740 master-0 kubenswrapper[7387]: I0308 03:14:14.562682 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/0.log" Mar 08 03:14:15.777876 master-0 kubenswrapper[7387]: I0308 03:14:15.777721 7387 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-rjwdp container/manager namespace/openshift-catalogd: Liveness probe 
status=failure output="Get \"http://10.128.0.42:8081/healthz\": dial tcp 10.128.0.42:8081: connect: connection refused" start-of-body= Mar 08 03:14:15.777876 master-0 kubenswrapper[7387]: I0308 03:14:15.777759 7387 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-rjwdp container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" start-of-body= Mar 08 03:14:15.777876 master-0 kubenswrapper[7387]: I0308 03:14:15.777800 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" podUID="7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/healthz\": dial tcp 10.128.0.42:8081: connect: connection refused" Mar 08 03:14:15.777876 master-0 kubenswrapper[7387]: I0308 03:14:15.777811 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" podUID="7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" Mar 08 03:14:15.786639 master-0 kubenswrapper[7387]: I0308 03:14:15.786583 7387 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-c74s2 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused" start-of-body= Mar 08 03:14:15.786988 master-0 kubenswrapper[7387]: I0308 03:14:15.786899 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" podUID="399c5025-da66-4c52-8e68-ea6c996d9cc8" containerName="manager" probeResult="failure" output="Get 
\"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused" Mar 08 03:14:15.787113 master-0 kubenswrapper[7387]: I0308 03:14:15.786599 7387 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-c74s2 container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.41:8081/healthz\": dial tcp 10.128.0.41:8081: connect: connection refused" start-of-body= Mar 08 03:14:15.787113 master-0 kubenswrapper[7387]: I0308 03:14:15.787030 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" podUID="399c5025-da66-4c52-8e68-ea6c996d9cc8" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.41:8081/healthz\": dial tcp 10.128.0.41:8081: connect: connection refused" Mar 08 03:14:16.015667 master-0 kubenswrapper[7387]: E0308 03:14:16.015450 7387 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{redhat-marketplace-qwkmn.189abf1c8622360e openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-qwkmn,UID:3a9142af-1b48-49b1-8e0f-53e8494d5e01,APIVersion:v1,ResourceVersion:7640,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\" in 20.115s (20.115s including waiting). 
Image size: 1229556414 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:12:12.554089998 +0000 UTC m=+68.948565679,LastTimestamp:2026-03-08 03:12:12.554089998 +0000 UTC m=+68.948565679,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:14:19.746174 master-0 kubenswrapper[7387]: E0308 03:14:19.746051 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 08 03:14:25.778127 master-0 kubenswrapper[7387]: I0308 03:14:25.778034 7387 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-rjwdp container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" start-of-body= Mar 08 03:14:25.779091 master-0 kubenswrapper[7387]: I0308 03:14:25.778135 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" podUID="7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" Mar 08 03:14:25.786870 master-0 kubenswrapper[7387]: I0308 03:14:25.786792 7387 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-c74s2 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused" start-of-body= Mar 08 03:14:25.787025 master-0 kubenswrapper[7387]: I0308 03:14:25.786954 7387 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" podUID="399c5025-da66-4c52-8e68-ea6c996d9cc8" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused" Mar 08 03:14:26.739771 master-0 kubenswrapper[7387]: I0308 03:14:26.739654 7387 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 03:14:26.832751 master-0 kubenswrapper[7387]: E0308 03:14:26.832701 7387 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 08 03:14:27.659720 master-0 kubenswrapper[7387]: I0308 03:14:27.659595 7387 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="2acce355218fb4db709f8cd62c68924badb7990d91f8c47577e1cc6d989432b9" exitCode=0 Mar 08 03:14:27.794567 master-0 kubenswrapper[7387]: E0308 03:14:27.794373 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:14:17Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:14:17Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:14:17Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:14:17Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ae042a5d32eb2f18d537f2068849e665b55df7d8360daedaaeea98bd2a79e769\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d077bbabe6cb885ed229119008480493e8364e4bfddaa00b099f68c52b016e6b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1733328350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:063b8972231e65eb43f6545ba37804f68138dc54d97b91a652a1c5bc7dc76aa5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cf682d23b2857e455609879a0867d171a221c18e2cec995dd79570b77c5a4705\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1272201949},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e0c034ae18daa01af8d073f8cc24ae4af87883c664304910eab1167fdfd60c0b\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:ef0c6b9e405f7a452211e063ce07ded04ccbe38b53860bfd71b5a7cd5072830a\\\",\\\"registry.redhat.io/redhat/redhat-marketpl
ace-index:v4.18\\\"],\\\"sizeBytes\\\":1229556414},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:79984dfbdf9aeae3985c7fd7515e12328775c0e7fc4782929d0998f4dd2a87c6\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:7be89499615ec913d0fe40ca89682080a3f1181a066dbc501c877cc7ccbcc9ae\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1220167376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d83537
7ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914\\\"],\\\"sizeBytes\\\":458126424},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9\\\"],\\\"sizeBytes\\\":456575686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626\\\"],\\\"sizeBytes\\\":448828105},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\"],\\\"sizeBytes\\\":443271011}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:14:35.778039 master-0 kubenswrapper[7387]: I0308 03:14:35.777940 7387 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-rjwdp container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.42:8081/healthz\": dial tcp 10.128.0.42:8081: connect: connection refused" start-of-body=
Mar 08 03:14:35.778039 master-0 kubenswrapper[7387]: I0308 03:14:35.778003 7387 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-rjwdp container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" start-of-body=
Mar 08 03:14:35.778868 master-0 kubenswrapper[7387]: I0308 03:14:35.778033 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" podUID="7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/healthz\": dial tcp 10.128.0.42:8081: connect: connection refused"
Mar 08 03:14:35.778868 master-0 kubenswrapper[7387]: I0308 03:14:35.778084 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" podUID="7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused"
Mar 08 03:14:35.786539 master-0 kubenswrapper[7387]: I0308 03:14:35.786488 7387 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-c74s2 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused" start-of-body=
Mar 08 03:14:35.786669 master-0 kubenswrapper[7387]: I0308 03:14:35.786578 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" podUID="399c5025-da66-4c52-8e68-ea6c996d9cc8" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused"
Mar 08 03:14:35.786853 master-0 kubenswrapper[7387]: I0308 03:14:35.786803 7387 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-c74s2 container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.41:8081/healthz\": dial tcp 10.128.0.41:8081: connect: connection refused" start-of-body=
Mar 08 03:14:35.786951 master-0 kubenswrapper[7387]: I0308 03:14:35.786871 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" podUID="399c5025-da66-4c52-8e68-ea6c996d9cc8" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.41:8081/healthz\": dial tcp 10.128.0.41:8081: connect: connection refused"
Mar 08 03:14:36.417965 master-0 kubenswrapper[7387]: E0308 03:14:36.417847 7387 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90ef7c0a_7c6f_45aa_865d_1e247110b265.slice/crio-conmon-722547003e9f3cd7874fd4300454109695088229261fd8d771f182d81e20178d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90ef7c0a_7c6f_45aa_865d_1e247110b265.slice/crio-722547003e9f3cd7874fd4300454109695088229261fd8d771f182d81e20178d.scope\": RecentStats: unable to find data in memory cache]"
Mar 08 03:14:36.723258 master-0 kubenswrapper[7387]: I0308 03:14:36.723078 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-k8xgg_90ef7c0a-7c6f-45aa-865d-1e247110b265/authentication-operator/1.log"
Mar 08 03:14:36.723794 master-0 kubenswrapper[7387]: I0308 03:14:36.723730 7387 generic.go:334] "Generic (PLEG): container finished" podID="90ef7c0a-7c6f-45aa-865d-1e247110b265" containerID="722547003e9f3cd7874fd4300454109695088229261fd8d771f182d81e20178d" exitCode=255
Mar 08 03:14:36.739118 master-0 kubenswrapper[7387]: I0308 03:14:36.739003 7387 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:14:36.747875 master-0 kubenswrapper[7387]: E0308 03:14:36.747443 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 08 03:14:37.795612 master-0 kubenswrapper[7387]: E0308 03:14:37.795520 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:14:39.744657 master-0 kubenswrapper[7387]: I0308 03:14:39.744589 7387 generic.go:334] "Generic (PLEG): container finished" podID="631b3a8e-43e0-4818-b6e1-bd61ac531ab6" containerID="ae6eee5afe5e46fa6bdda2c614fc3054391ae41ef6fbf435d604af42a3bf8ed4" exitCode=0
Mar 08 03:14:41.761850 master-0 kubenswrapper[7387]: I0308 03:14:41.761760 7387 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="d3c34dc0fc7fb67fe9086e5e2aa23fc62ce1243d54049c653663a3e880adacde" exitCode=1
Mar 08 03:14:43.738948 master-0 kubenswrapper[7387]: I0308 03:14:43.738851 7387 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 08 03:14:44.784771 master-0 kubenswrapper[7387]: I0308 03:14:44.784705 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/1.log"
Mar 08 03:14:44.785593 master-0 kubenswrapper[7387]: I0308 03:14:44.785427 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/0.log"
Mar 08 03:14:44.785593 master-0 kubenswrapper[7387]: I0308 03:14:44.785476 7387 generic.go:334] "Generic (PLEG): container finished" podID="9fb588a9-6240-4513-8e4b-248eb43d3f06" containerID="5cdb4a8c29f6fa12b785f47ff7ebb0b695874d4bc9ab1dc427c074f4ca967686" exitCode=1
Mar 08 03:14:45.777548 master-0 kubenswrapper[7387]: I0308 03:14:45.777442 7387 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-rjwdp container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" start-of-body=
Mar 08 03:14:45.777548 master-0 kubenswrapper[7387]: I0308 03:14:45.777524 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" podUID="7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused"
Mar 08 03:14:45.786312 master-0 kubenswrapper[7387]: I0308 03:14:45.786226 7387 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-c74s2 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused" start-of-body=
Mar 08 03:14:45.786312 master-0 kubenswrapper[7387]: I0308 03:14:45.786274 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" podUID="399c5025-da66-4c52-8e68-ea6c996d9cc8" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused"
Mar 08 03:14:45.793383 master-0 kubenswrapper[7387]: I0308 03:14:45.793304 7387 generic.go:334] "Generic (PLEG): container finished" podID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" containerID="101ea5b74925f2629d3b673abc3df2646b2971ed5a02a4d33f72d8e0bafc02dc" exitCode=0
Mar 08 03:14:47.796284 master-0 kubenswrapper[7387]: E0308 03:14:47.795944 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:14:47.837629 master-0 kubenswrapper[7387]: E0308 03:14:47.837526 7387 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0"
Mar 08 03:14:47.838088 master-0 kubenswrapper[7387]: E0308 03:14:47.837784 7387 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.014s"
Mar 08 03:14:47.838088 master-0 kubenswrapper[7387]: I0308 03:14:47.837823 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:14:47.838088 master-0 kubenswrapper[7387]: I0308 03:14:47.837862 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" event={"ID":"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b","Type":"ContainerDied","Data":"0757df7640b506a1487a772cc33679d0f06cf17369689e5bb19ed682a933347c"}
Mar 08 03:14:47.838088 master-0 kubenswrapper[7387]: I0308 03:14:47.837942 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6" event={"ID":"1fa64f1b-9f10-488b-8f94-1600774062c4","Type":"ContainerDied","Data":"97e7e8e1d4c76162fdd36f707ca3e2faaa5f8b65907e58ff8edb116f08fe408b"}
Mar 08 03:14:47.838559 master-0 kubenswrapper[7387]: I0308 03:14:47.838510 7387 scope.go:117] "RemoveContainer" containerID="d3c34dc0fc7fb67fe9086e5e2aa23fc62ce1243d54049c653663a3e880adacde"
Mar 08 03:14:47.838777 master-0 kubenswrapper[7387]: E0308 03:14:47.838722 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 08 03:14:47.846590 master-0 kubenswrapper[7387]: I0308 03:14:47.846535 7387 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 08 03:14:50.018425 master-0 kubenswrapper[7387]: E0308 03:14:50.018199 7387 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{community-operators-bv2v9.189abf1c871fbc6d openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-bv2v9,UID:10895809-a444-42ec-a41f-111e17f6beb3,APIVersion:v1,ResourceVersion:7523,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/community-operator-index:v4.18\" in 22.163s (22.163s including waiting). Image size: 1220167376 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:12:12.570705005 +0000 UTC m=+68.965180716,LastTimestamp:2026-03-08 03:12:12.570705005 +0000 UTC m=+68.965180716,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 08 03:14:50.566618 master-0 kubenswrapper[7387]: I0308 03:14:50.566512 7387 patch_prober.go:28] interesting pod/controller-manager-77c5c9d7dd-xtftv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body=
Mar 08 03:14:50.566618 master-0 kubenswrapper[7387]: I0308 03:14:50.566600 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused"
Mar 08 03:14:50.566989 master-0 kubenswrapper[7387]: I0308 03:14:50.566531 7387 patch_prober.go:28] interesting pod/controller-manager-77c5c9d7dd-xtftv container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body=
Mar 08 03:14:50.566989 master-0 kubenswrapper[7387]: I0308 03:14:50.566774 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused"
Mar 08 03:14:53.748554 master-0 kubenswrapper[7387]: E0308 03:14:53.748476 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 08 03:14:55.778555 master-0 kubenswrapper[7387]: I0308 03:14:55.778461 7387 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-rjwdp container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" start-of-body=
Mar 08 03:14:55.778555 master-0 kubenswrapper[7387]: I0308 03:14:55.778575 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" podUID="7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused"
Mar 08 03:14:55.787293 master-0 kubenswrapper[7387]: I0308 03:14:55.787211 7387 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-c74s2 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused" start-of-body=
Mar 08 03:14:55.787463 master-0 kubenswrapper[7387]: I0308 03:14:55.787321 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" podUID="399c5025-da66-4c52-8e68-ea6c996d9cc8" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused"
Mar 08 03:14:57.797347 master-0 kubenswrapper[7387]: E0308 03:14:57.797242 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:15:00.566097 master-0 kubenswrapper[7387]: I0308 03:15:00.565888 7387 patch_prober.go:28] interesting pod/controller-manager-77c5c9d7dd-xtftv container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body=
Mar 08 03:15:00.566097 master-0 kubenswrapper[7387]: I0308 03:15:00.565991 7387 patch_prober.go:28] interesting pod/controller-manager-77c5c9d7dd-xtftv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body=
Mar 08 03:15:00.566097 master-0 kubenswrapper[7387]: I0308 03:15:00.566052 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused"
Mar 08 03:15:00.566097 master-0 kubenswrapper[7387]: I0308 03:15:00.566077 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused"
Mar 08 03:15:05.778332 master-0 kubenswrapper[7387]: I0308 03:15:05.778170 7387 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-rjwdp container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" start-of-body=
Mar 08 03:15:05.778332 master-0 kubenswrapper[7387]: I0308 03:15:05.778272 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" podUID="7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused"
Mar 08 03:15:05.786896 master-0 kubenswrapper[7387]: I0308 03:15:05.786811 7387 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-c74s2 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused" start-of-body=
Mar 08 03:15:05.786896 master-0 kubenswrapper[7387]: I0308 03:15:05.786877 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" podUID="399c5025-da66-4c52-8e68-ea6c996d9cc8" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused"
Mar 08 03:15:07.798464 master-0 kubenswrapper[7387]: E0308 03:15:07.798276 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:15:07.798464 master-0 kubenswrapper[7387]: E0308 03:15:07.798345 7387 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 08 03:15:10.084077 master-0 kubenswrapper[7387]: I0308 03:15:10.083978 7387 status_manager.go:851] "Failed to get status for pod" podUID="3a9142af-1b48-49b1-8e0f-53e8494d5e01" pod="openshift-marketplace/redhat-marketplace-qwkmn" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods redhat-marketplace-qwkmn)"
Mar 08 03:15:10.566305 master-0 kubenswrapper[7387]: I0308 03:15:10.566220 7387 patch_prober.go:28] interesting pod/controller-manager-77c5c9d7dd-xtftv container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body=
Mar 08 03:15:10.566496 master-0 kubenswrapper[7387]: I0308 03:15:10.566317 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused"
Mar 08 03:15:10.566496 master-0 kubenswrapper[7387]: I0308 03:15:10.566447 7387 patch_prober.go:28] interesting pod/controller-manager-77c5c9d7dd-xtftv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body=
Mar 08 03:15:10.566644 master-0 kubenswrapper[7387]: I0308 03:15:10.566520 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused"
Mar 08 03:15:10.749839 master-0 kubenswrapper[7387]: E0308 03:15:10.749744 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 08 03:15:10.966650 master-0 kubenswrapper[7387]: I0308 03:15:10.966557 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-zcr8w_5a058138-8039-4841-821b-7ee5bb8648e4/kube-apiserver-operator/1.log"
Mar 08 03:15:10.967446 master-0 kubenswrapper[7387]: I0308 03:15:10.967392 7387 generic.go:334] "Generic (PLEG): container finished" podID="5a058138-8039-4841-821b-7ee5bb8648e4" containerID="dc97f8f27bad8456e85d3556b0266da3f51b3219e17af7d58b019107138fa1da" exitCode=255
Mar 08 03:15:10.970178 master-0 kubenswrapper[7387]: I0308 03:15:10.970144 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-gstfr_2a506cf6-bc39-4089-9caa-4c14c4d15c11/openshift-apiserver-operator/1.log"
Mar 08 03:15:10.971019 master-0 kubenswrapper[7387]: I0308 03:15:10.970957 7387 generic.go:334] "Generic (PLEG): container finished" podID="2a506cf6-bc39-4089-9caa-4c14c4d15c11" containerID="546471fba50615e89619e415aa22b95c50bac9cc8ea20a1f87e7260bbf84e270" exitCode=255
Mar 08 03:15:10.973779 master-0 kubenswrapper[7387]: I0308 03:15:10.973737 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-dn4ll_c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/etcd-operator/1.log"
Mar 08 03:15:10.974417 master-0 kubenswrapper[7387]: I0308 03:15:10.974376 7387 generic.go:334] "Generic (PLEG): container finished" podID="c6e4afd0-fbcd-49c7-9132-b54c9c28b74b" containerID="8b17e447bf7b73e8b40c761d522dbc3e0c3fec36ea0bf2258e54193e9736e408" exitCode=255
Mar 08 03:15:10.977183 master-0 kubenswrapper[7387]: I0308 03:15:10.977147 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-7f65c457f5-7k8j7_1d446527-f3fd-4a37-a980-7445031928d1/kube-storage-version-migrator-operator/1.log"
Mar 08 03:15:10.978175 master-0 kubenswrapper[7387]: I0308 03:15:10.978128 7387 generic.go:334] "Generic (PLEG): container finished" podID="1d446527-f3fd-4a37-a980-7445031928d1" containerID="0529887e94e92870d8170b7a6f9ac44c1a9e4434031edde3aa2a6844aae2f3c1" exitCode=255
Mar 08 03:15:10.981076 master-0 kubenswrapper[7387]: I0308 03:15:10.981042 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-wxrfp_89fc77c9-b444-4828-8a35-c63ea9335245/network-operator/1.log"
Mar 08 03:15:10.982053 master-0 kubenswrapper[7387]: I0308 03:15:10.982014 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-wxrfp_89fc77c9-b444-4828-8a35-c63ea9335245/network-operator/0.log"
Mar 08 03:15:10.982169 master-0 kubenswrapper[7387]: I0308 03:15:10.982075 7387 generic.go:334] "Generic (PLEG): container finished" podID="89fc77c9-b444-4828-8a35-c63ea9335245" containerID="6a0ebfa9daddb42b992bf1e47626f21a3f530f0fb9ecbcd53e5eedae16779630" exitCode=255
Mar 08 03:15:10.984675 master-0 kubenswrapper[7387]: I0308 03:15:10.984635 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-69b6fc6b88-vjmf6_1fa64f1b-9f10-488b-8f94-1600774062c4/service-ca-operator/1.log"
Mar 08 03:15:10.985331 master-0 kubenswrapper[7387]: I0308 03:15:10.985287 7387 generic.go:334] "Generic (PLEG): container finished" podID="1fa64f1b-9f10-488b-8f94-1600774062c4" containerID="7f2168458d76e9e97ed4421cfc89aa215f737c7dfdedd5442acd38bfb2f3b2c4" exitCode=255
Mar 08 03:15:15.778088 master-0 kubenswrapper[7387]: I0308 03:15:15.777883 7387 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-rjwdp container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused" start-of-body=
Mar 08 03:15:15.778868 master-0 kubenswrapper[7387]: I0308 03:15:15.778124 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" podUID="7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.42:8081/readyz\": dial tcp 10.128.0.42:8081: connect: connection refused"
Mar 08 03:15:15.786702 master-0 kubenswrapper[7387]: I0308 03:15:15.786620 7387 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-c74s2 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused" start-of-body=
Mar 08 03:15:15.786952 master-0 kubenswrapper[7387]: I0308 03:15:15.786713 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" podUID="399c5025-da66-4c52-8e68-ea6c996d9cc8" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused"
Mar 08 03:15:20.566748 master-0 kubenswrapper[7387]: I0308 03:15:20.566617 7387 patch_prober.go:28] interesting pod/controller-manager-77c5c9d7dd-xtftv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body=
Mar 08 03:15:20.566748 master-0 kubenswrapper[7387]: I0308 03:15:20.566716 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused"
Mar 08 03:15:21.850176 master-0 kubenswrapper[7387]: E0308 03:15:21.850046 7387 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0"
Mar 08 03:15:21.851054 master-0 kubenswrapper[7387]: E0308 03:15:21.850304 7387 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.012s"
Mar 08 03:15:21.851054 master-0 kubenswrapper[7387]: I0308 03:15:21.850390 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"
Mar 08 03:15:21.851054 master-0 kubenswrapper[7387]: I0308 03:15:21.850427 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll"
Mar 08 03:15:21.851513 master-0 kubenswrapper[7387]: I0308 03:15:21.851454 7387 scope.go:117] "RemoveContainer" containerID="847ec71b717fbc403d7670e2fb6fcb0eb16c5961bfffd67ba80ebb137144703d"
Mar 08 03:15:21.851716 master-0 kubenswrapper[7387]: I0308 03:15:21.851650 7387 scope.go:117] "RemoveContainer" containerID="8b17e447bf7b73e8b40c761d522dbc3e0c3fec36ea0bf2258e54193e9736e408"
Mar 08 03:15:21.861893 master-0 kubenswrapper[7387]: I0308 03:15:21.861776 7387 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 08 03:15:23.083656 master-0 kubenswrapper[7387]: I0308 03:15:23.083563 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-rjwdp_7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b/manager/0.log"
Mar 08 03:15:23.087586 master-0 kubenswrapper[7387]: I0308 03:15:23.087540 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-dn4ll_c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/etcd-operator/1.log"
Mar 08 03:15:24.021329 master-0 kubenswrapper[7387]: E0308 03:15:24.021126 7387 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189abf1c892c5c96 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:12:12.60508687 +0000 UTC m=+68.999562561,LastTimestamp:2026-03-08 03:12:12.60508687 +0000 UTC m=+68.999562561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 08 03:15:25.786779 master-0 kubenswrapper[7387]: I0308 03:15:25.786701 7387 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-c74s2 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused" start-of-body=
Mar 08 03:15:25.787887 master-0 kubenswrapper[7387]: I0308 03:15:25.786801 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" podUID="399c5025-da66-4c52-8e68-ea6c996d9cc8" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused"
Mar 08 03:15:27.750833 master-0 kubenswrapper[7387]: E0308 03:15:27.750705 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 08 03:15:28.037156 master-0 kubenswrapper[7387]: E0308 03:15:28.036707 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:15:18Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:15:18Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:15:18Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:15:18Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ae042a5d32eb2f18d537f2068849e665b55df7d8360daedaaeea98bd2a79e769\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d077bbabe6cb885ed229119008480493e8364e4bfddaa00b099f68c52b016e6b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1733328350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:063b8972231e65eb43f6545ba37804f68138dc54d97b91a652a1c5bc7dc76aa5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cf682d23b2857e455609879a0867d171a221c18e2cec995dd79570b77c5a4705\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1272201949},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e0c034ae18daa01af8d073f8cc24ae4af87883c664304910eab1167fdfd60c0b\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:ef0c6b9e405f7a452211e063ce07ded04ccbe38b53860bfd71b5a7cd5072830a\\\",\\\"registry.redhat.io/redhat/redhat-marketpl
ace-index:v4.18\\\"],\\\"sizeBytes\\\":1229556414},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:79984dfbdf9aeae3985c7fd7515e12328775c0e7fc4782929d0998f4dd2a87c6\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:7be89499615ec913d0fe40ca89682080a3f1181a066dbc501c877cc7ccbcc9ae\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1220167376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d83537
7ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914\\\"],\\\"sizeBytes\\\":458126424},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9\\\"],\\\"sizeBytes\\\":456575686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626\\\"],\\\"sizeBytes\\\":448828105},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\
"],\\\"sizeBytes\\\":443271011}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:15:30.566074 master-0 kubenswrapper[7387]: I0308 03:15:30.565992 7387 patch_prober.go:28] interesting pod/controller-manager-77c5c9d7dd-xtftv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body= Mar 08 03:15:30.566653 master-0 kubenswrapper[7387]: I0308 03:15:30.566078 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" Mar 08 03:15:35.787379 master-0 kubenswrapper[7387]: I0308 03:15:35.787286 7387 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-c74s2 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused" start-of-body= Mar 08 03:15:35.788376 master-0 kubenswrapper[7387]: I0308 03:15:35.787407 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" podUID="399c5025-da66-4c52-8e68-ea6c996d9cc8" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused" Mar 08 03:15:38.037555 master-0 kubenswrapper[7387]: E0308 03:15:38.037401 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node 
\"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:15:40.566312 master-0 kubenswrapper[7387]: I0308 03:15:40.566208 7387 patch_prober.go:28] interesting pod/controller-manager-77c5c9d7dd-xtftv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body= Mar 08 03:15:40.566312 master-0 kubenswrapper[7387]: I0308 03:15:40.566295 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" Mar 08 03:15:44.751605 master-0 kubenswrapper[7387]: E0308 03:15:44.751519 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 08 03:15:45.248697 master-0 kubenswrapper[7387]: I0308 03:15:45.248596 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-rz5c8_89e15db4-c541-4d53-878d-706fa022f970/kube-scheduler-operator-container/1.log" Mar 08 03:15:45.249327 master-0 kubenswrapper[7387]: I0308 03:15:45.249262 7387 generic.go:334] "Generic (PLEG): container finished" podID="89e15db4-c541-4d53-878d-706fa022f970" containerID="9a657401ad344c6bcb17809838c09bd965a31aa4d11aa9a3d44a7eea2ef4074b" exitCode=255 Mar 08 03:15:45.251586 master-0 kubenswrapper[7387]: I0308 03:15:45.251545 7387 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-xtwpr_2468d2a3-ec65-4888-a86a-3f66fa311f56/kube-controller-manager-operator/1.log" Mar 08 03:15:45.252172 master-0 kubenswrapper[7387]: I0308 03:15:45.252112 7387 generic.go:334] "Generic (PLEG): container finished" podID="2468d2a3-ec65-4888-a86a-3f66fa311f56" containerID="e0aecb58f6976eba8696296a6b4880e419ddc1ff4060c7d5c4b00288d7622719" exitCode=255 Mar 08 03:15:45.787451 master-0 kubenswrapper[7387]: I0308 03:15:45.787295 7387 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-c74s2 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused" start-of-body= Mar 08 03:15:45.787451 master-0 kubenswrapper[7387]: I0308 03:15:45.787421 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" podUID="399c5025-da66-4c52-8e68-ea6c996d9cc8" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused" Mar 08 03:15:48.038600 master-0 kubenswrapper[7387]: E0308 03:15:48.038499 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:15:50.566589 master-0 kubenswrapper[7387]: I0308 03:15:50.566489 7387 patch_prober.go:28] interesting pod/controller-manager-77c5c9d7dd-xtftv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" 
start-of-body= Mar 08 03:15:50.567456 master-0 kubenswrapper[7387]: I0308 03:15:50.566617 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" Mar 08 03:15:55.786623 master-0 kubenswrapper[7387]: I0308 03:15:55.786502 7387 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-c74s2 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused" start-of-body= Mar 08 03:15:55.787455 master-0 kubenswrapper[7387]: I0308 03:15:55.786629 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" podUID="399c5025-da66-4c52-8e68-ea6c996d9cc8" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.41:8081/readyz\": dial tcp 10.128.0.41:8081: connect: connection refused" Mar 08 03:15:55.866064 master-0 kubenswrapper[7387]: E0308 03:15:55.865569 7387 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 08 03:15:55.866064 master-0 kubenswrapper[7387]: E0308 03:15:55.865962 7387 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.015s" Mar 08 03:15:55.866064 master-0 kubenswrapper[7387]: I0308 03:15:55.866015 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:15:55.866469 master-0 kubenswrapper[7387]: I0308 03:15:55.866098 7387 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:15:55.866469 master-0 kubenswrapper[7387]: I0308 03:15:55.866125 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:15:55.867243 master-0 kubenswrapper[7387]: I0308 03:15:55.867172 7387 scope.go:117] "RemoveContainer" containerID="d3c34dc0fc7fb67fe9086e5e2aa23fc62ce1243d54049c653663a3e880adacde" Mar 08 03:15:55.867776 master-0 kubenswrapper[7387]: I0308 03:15:55.867719 7387 scope.go:117] "RemoveContainer" containerID="a8f3f14f501b72ff362550257f13a332eecf70ec4f446aeb3d199baf5fd9fcca" Mar 08 03:15:55.876689 master-0 kubenswrapper[7387]: I0308 03:15:55.876634 7387 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 08 03:15:56.348972 master-0 kubenswrapper[7387]: I0308 03:15:56.348762 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-c74s2_399c5025-da66-4c52-8e68-ea6c996d9cc8/manager/0.log" Mar 08 03:15:58.024654 master-0 kubenswrapper[7387]: E0308 03:15:58.024447 7387 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{certified-operators-l2dj4.189abf1c8a0f7f64 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-l2dj4,UID:7afe61b3-1460-48ed-9369-4d9893d2f4f4,APIVersion:v1,ResourceVersion:7533,FieldPath:spec.initContainers{extract-content},},Reason:Created,Message:Created container: extract-content,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:12:12.619972452 +0000 UTC 
m=+69.014448133,LastTimestamp:2026-03-08 03:12:12.619972452 +0000 UTC m=+69.014448133,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:15:58.039164 master-0 kubenswrapper[7387]: E0308 03:15:58.039102 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:16:00.565757 master-0 kubenswrapper[7387]: I0308 03:16:00.565668 7387 patch_prober.go:28] interesting pod/controller-manager-77c5c9d7dd-xtftv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body= Mar 08 03:16:00.565757 master-0 kubenswrapper[7387]: I0308 03:16:00.565757 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" Mar 08 03:16:01.754780 master-0 kubenswrapper[7387]: E0308 03:16:01.754658 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 08 03:16:08.040328 master-0 kubenswrapper[7387]: E0308 03:16:08.040230 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:16:08.040328 master-0 kubenswrapper[7387]: E0308 03:16:08.040297 7387 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 08 03:16:10.085616 master-0 kubenswrapper[7387]: I0308 03:16:10.085511 7387 status_manager.go:851] "Failed to get status for pod" podUID="f78c05e1499b533b83f091333d61f045" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods bootstrap-kube-controller-manager-master-0)" Mar 08 03:16:10.566346 master-0 kubenswrapper[7387]: I0308 03:16:10.566271 7387 patch_prober.go:28] interesting pod/controller-manager-77c5c9d7dd-xtftv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body= Mar 08 03:16:10.566607 master-0 kubenswrapper[7387]: I0308 03:16:10.566362 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" Mar 08 03:16:12.549686 master-0 kubenswrapper[7387]: I0308 03:16:12.549628 7387 scope.go:117] "RemoveContainer" containerID="8f306ce0a691aaca594f05377489d0fedf338512ca0fc5f460eabd4f8b2245d1" Mar 08 03:16:18.756161 master-0 kubenswrapper[7387]: E0308 03:16:18.756054 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 08 03:16:20.566018 master-0 kubenswrapper[7387]: I0308 03:16:20.565865 7387 patch_prober.go:28] interesting pod/controller-manager-77c5c9d7dd-xtftv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body= Mar 08 03:16:20.566018 master-0 kubenswrapper[7387]: I0308 03:16:20.566000 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" Mar 08 03:16:28.364639 master-0 kubenswrapper[7387]: E0308 03:16:28.364363 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:16:18Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:16:18Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:16:18Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:16:18Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ae042a5d32eb2f18d537f2068849e665b55df7d8360daedaaeea98bd2a79e769\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d077bbabe6cb885ed229119008480493e8364e4bfddaa00b099f68c52b016e6b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1733328350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:063b8972231e65eb43f6545ba37804f68138dc54d97b91a652a1c5bc7dc76aa5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cf682d23b2857e455609879a0867d171a221c18e2cec995dd79570b77c5a4705\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1272201949},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e0c034ae18daa01af8d073f8cc24ae4af87883c664304910eab1167fdfd60c0b\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:ef0c6b9e405f7a452211e063ce07ded04ccbe38b53860bfd71b5a7cd5072830a\\\",\\\"registry.redhat.io/redhat/redhat-marketpl
ace-index:v4.18\\\"],\\\"sizeBytes\\\":1229556414},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:79984dfbdf9aeae3985c7fd7515e12328775c0e7fc4782929d0998f4dd2a87c6\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:7be89499615ec913d0fe40ca89682080a3f1181a066dbc501c877cc7ccbcc9ae\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1220167376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d83537
7ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914\\\"],\\\"sizeBytes\\\":458126424},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9\\\"],\\\"sizeBytes\\\":456575686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626\\\"],\\\"sizeBytes\\\":448828105},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\
"],\\\"sizeBytes\\\":443271011}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 03:16:29.880142 master-0 kubenswrapper[7387]: E0308 03:16:29.880068 7387 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 08 03:16:29.880894 master-0 kubenswrapper[7387]: E0308 03:16:29.880283 7387 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.014s" Mar 08 03:16:29.880894 master-0 kubenswrapper[7387]: I0308 03:16:29.880318 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-ppdzb" event={"ID":"4fd323ae-11bf-4207-bdce-4d51a9c19dc3","Type":"ContainerDied","Data":"c5eec4110852b5b6f65ead45beeb23e454a4f0a36ca8d676067c0e98d6a8439c"} Mar 08 03:16:29.880894 master-0 kubenswrapper[7387]: I0308 03:16:29.880365 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7" event={"ID":"1d446527-f3fd-4a37-a980-7445031928d1","Type":"ContainerDied","Data":"14837a65d7b37118db204275e04a4816d1b952e719453adc75bef1d793ecb182"} Mar 08 03:16:29.881442 master-0 kubenswrapper[7387]: I0308 03:16:29.881347 7387 scope.go:117] "RemoveContainer" containerID="14837a65d7b37118db204275e04a4816d1b952e719453adc75bef1d793ecb182" Mar 08 03:16:29.882105 master-0 kubenswrapper[7387]: I0308 03:16:29.881980 7387 scope.go:117] "RemoveContainer" containerID="0529887e94e92870d8170b7a6f9ac44c1a9e4434031edde3aa2a6844aae2f3c1" Mar 08 03:16:29.895997 master-0 kubenswrapper[7387]: I0308 03:16:29.895893 7387 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 
08 03:16:30.565706 master-0 kubenswrapper[7387]: I0308 03:16:30.565636 7387 patch_prober.go:28] interesting pod/controller-manager-77c5c9d7dd-xtftv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body= Mar 08 03:16:30.566054 master-0 kubenswrapper[7387]: I0308 03:16:30.565724 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" Mar 08 03:16:30.592498 master-0 kubenswrapper[7387]: I0308 03:16:30.592429 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-7f65c457f5-7k8j7_1d446527-f3fd-4a37-a980-7445031928d1/kube-storage-version-migrator-operator/1.log" Mar 08 03:16:32.027510 master-0 kubenswrapper[7387]: E0308 03:16:32.027300 7387 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{certified-operators-l2dj4.189abf1c8adf8c8f openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-l2dj4,UID:7afe61b3-1460-48ed-9369-4d9893d2f4f4,APIVersion:v1,ResourceVersion:7533,FieldPath:spec.initContainers{extract-content},},Reason:Started,Message:Started container extract-content,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:12:12.633607311 +0000 UTC m=+69.028083002,LastTimestamp:2026-03-08 03:12:12.633607311 +0000 UTC 
m=+69.028083002,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:16:35.756952 master-0 kubenswrapper[7387]: E0308 03:16:35.756840 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s" Mar 08 03:16:38.365883 master-0 kubenswrapper[7387]: E0308 03:16:38.365738 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:16:40.565839 master-0 kubenswrapper[7387]: I0308 03:16:40.565785 7387 patch_prober.go:28] interesting pod/controller-manager-77c5c9d7dd-xtftv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body= Mar 08 03:16:40.566409 master-0 kubenswrapper[7387]: I0308 03:16:40.565856 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" Mar 08 03:16:40.662764 master-0 kubenswrapper[7387]: I0308 03:16:40.662694 7387 generic.go:334] "Generic (PLEG): container finished" podID="3d69f101-60a8-41fd-bcda-4eb654c626a2" containerID="60e1587c9cf4a4020a136e8642e8046f93d54430d105f0f097e182d865618fc6" exitCode=0 Mar 08 03:16:46.451100 master-0 kubenswrapper[7387]: I0308 03:16:46.450954 7387 prober.go:107] "Probe failed" 
probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" Mar 08 03:16:46.705850 master-0 kubenswrapper[7387]: I0308 03:16:46.705637 7387 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="67a655ba69c1284df3e55d35d8747eb2453fb400eccb0f1604d78be6e1c5d034" exitCode=0 Mar 08 03:16:46.708459 master-0 kubenswrapper[7387]: I0308 03:16:46.708408 7387 generic.go:334] "Generic (PLEG): container finished" podID="7af634f0-65ac-402a-acd6-a8aad11b37ab" containerID="af65ea05bf6d79301d65510b68a66fb2935b708f2ae46cc68e36995843b0c55c" exitCode=0 Mar 08 03:16:48.366741 master-0 kubenswrapper[7387]: E0308 03:16:48.366633 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:16:50.565715 master-0 kubenswrapper[7387]: I0308 03:16:50.565609 7387 patch_prober.go:28] interesting pod/controller-manager-77c5c9d7dd-xtftv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body= Mar 08 03:16:50.566595 master-0 kubenswrapper[7387]: I0308 03:16:50.565702 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" Mar 08 03:16:51.743956 master-0 
kubenswrapper[7387]: I0308 03:16:51.743829 7387 generic.go:334] "Generic (PLEG): container finished" podID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerID="122d82dfb1bfd9c05bd161084f45586e27293d3320c13ab8454659ed4cdae5c0" exitCode=0 Mar 08 03:16:52.752821 master-0 kubenswrapper[7387]: I0308 03:16:52.752660 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-dn4ll_c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/etcd-operator/2.log" Mar 08 03:16:52.753651 master-0 kubenswrapper[7387]: I0308 03:16:52.753301 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-dn4ll_c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/etcd-operator/1.log" Mar 08 03:16:52.753823 master-0 kubenswrapper[7387]: I0308 03:16:52.753764 7387 generic.go:334] "Generic (PLEG): container finished" podID="c6e4afd0-fbcd-49c7-9132-b54c9c28b74b" containerID="83e1d070000e62345139ef045f8a5e382a6175a1f7868ac9989b2dfe38a06c65" exitCode=255 Mar 08 03:16:52.758874 master-0 kubenswrapper[7387]: E0308 03:16:52.758799 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 08 03:16:53.008258 master-0 kubenswrapper[7387]: I0308 03:16:53.008163 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" start-of-body= Mar 08 03:16:53.008258 master-0 kubenswrapper[7387]: I0308 03:16:53.008245 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" 
podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" Mar 08 03:16:53.304898 master-0 kubenswrapper[7387]: I0308 03:16:53.304687 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" Mar 08 03:16:53.381576 master-0 kubenswrapper[7387]: I0308 03:16:53.381439 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" start-of-body= Mar 08 03:16:53.381576 master-0 kubenswrapper[7387]: I0308 03:16:53.381507 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" Mar 08 03:16:54.769109 master-0 kubenswrapper[7387]: I0308 03:16:54.769045 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-c4zs4_103158c5-c99f-4224-bf5a-e23b1aaf9172/cluster-node-tuning-operator/0.log" Mar 08 03:16:54.770143 master-0 kubenswrapper[7387]: I0308 03:16:54.769114 7387 generic.go:334] "Generic (PLEG): container finished" podID="103158c5-c99f-4224-bf5a-e23b1aaf9172" containerID="a90adc87011fbb7cd1968febcefc0ce682e90d9df30e52bef5969b7cab457d60" exitCode=1 Mar 08 
03:16:55.788569 master-0 kubenswrapper[7387]: I0308 03:16:55.788293 7387 generic.go:334] "Generic (PLEG): container finished" podID="4711e21f-da6d-47ee-8722-64663e05de10" containerID="817f432c51c661f9dc4a70152616d33f0d5d8c245d1f7dbc4c3905c7f6f13361" exitCode=0 Mar 08 03:16:56.008549 master-0 kubenswrapper[7387]: I0308 03:16:56.008440 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" start-of-body= Mar 08 03:16:56.008549 master-0 kubenswrapper[7387]: I0308 03:16:56.008536 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" Mar 08 03:16:56.381610 master-0 kubenswrapper[7387]: I0308 03:16:56.381460 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" start-of-body= Mar 08 03:16:56.381610 master-0 kubenswrapper[7387]: I0308 03:16:56.381543 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" Mar 08 03:16:56.450591 master-0 kubenswrapper[7387]: I0308 03:16:56.450505 7387 prober.go:107] "Probe failed" 
probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" Mar 08 03:16:57.805831 master-0 kubenswrapper[7387]: I0308 03:16:57.805759 7387 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="c112ca6cd11ea4c9ce69d6d6d519c8fce15ec706e2d5984472b111b57942340d" exitCode=1 Mar 08 03:16:58.368272 master-0 kubenswrapper[7387]: E0308 03:16:58.368153 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:16:59.007987 master-0 kubenswrapper[7387]: I0308 03:16:59.007893 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" start-of-body= Mar 08 03:16:59.008786 master-0 kubenswrapper[7387]: I0308 03:16:59.008021 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" Mar 08 03:16:59.381211 master-0 kubenswrapper[7387]: I0308 03:16:59.381044 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get 
\"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" start-of-body= Mar 08 03:16:59.381211 master-0 kubenswrapper[7387]: I0308 03:16:59.381125 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" Mar 08 03:17:00.566283 master-0 kubenswrapper[7387]: I0308 03:17:00.566220 7387 patch_prober.go:28] interesting pod/controller-manager-77c5c9d7dd-xtftv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body= Mar 08 03:17:00.567267 master-0 kubenswrapper[7387]: I0308 03:17:00.567071 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" Mar 08 03:17:02.381580 master-0 kubenswrapper[7387]: I0308 03:17:02.381477 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" start-of-body= Mar 08 03:17:02.381580 master-0 kubenswrapper[7387]: I0308 03:17:02.381560 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" 
probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" Mar 08 03:17:02.843964 master-0 kubenswrapper[7387]: I0308 03:17:02.843838 7387 generic.go:334] "Generic (PLEG): container finished" podID="d82cf0db-0891-482d-856b-1675843042dd" containerID="500c7b149f4f2f095cf355a9cad0c5ca80a3d389709c1ca8a3ccda38df4eb432" exitCode=0 Mar 08 03:17:03.305193 master-0 kubenswrapper[7387]: I0308 03:17:03.305100 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" Mar 08 03:17:03.898866 master-0 kubenswrapper[7387]: E0308 03:17:03.898806 7387 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 08 03:17:03.899749 master-0 kubenswrapper[7387]: E0308 03:17:03.899063 7387 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.019s" Mar 08 03:17:03.899749 master-0 kubenswrapper[7387]: I0308 03:17:03.899098 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" Mar 08 03:17:03.899749 master-0 kubenswrapper[7387]: I0308 03:17:03.899174 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" Mar 08 03:17:03.899749 master-0 kubenswrapper[7387]: I0308 03:17:03.899222 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" Mar 08 03:17:03.900274 master-0 kubenswrapper[7387]: I0308 03:17:03.900240 
7387 scope.go:117] "RemoveContainer" containerID="546471fba50615e89619e415aa22b95c50bac9cc8ea20a1f87e7260bbf84e270" Mar 08 03:17:03.901958 master-0 kubenswrapper[7387]: I0308 03:17:03.901870 7387 scope.go:117] "RemoveContainer" containerID="122d82dfb1bfd9c05bd161084f45586e27293d3320c13ab8454659ed4cdae5c0" Mar 08 03:17:03.902681 master-0 kubenswrapper[7387]: I0308 03:17:03.902633 7387 scope.go:117] "RemoveContainer" containerID="83e1d070000e62345139ef045f8a5e382a6175a1f7868ac9989b2dfe38a06c65" Mar 08 03:17:03.903022 master-0 kubenswrapper[7387]: E0308 03:17:03.902975 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=etcd-operator pod=etcd-operator-5884b9cd56-dn4ll_openshift-etcd-operator(c6e4afd0-fbcd-49c7-9132-b54c9c28b74b)\"" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" podUID="c6e4afd0-fbcd-49c7-9132-b54c9c28b74b" Mar 08 03:17:03.903852 master-0 kubenswrapper[7387]: I0308 03:17:03.903805 7387 scope.go:117] "RemoveContainer" containerID="9a657401ad344c6bcb17809838c09bd965a31aa4d11aa9a3d44a7eea2ef4074b" Mar 08 03:17:03.904246 master-0 kubenswrapper[7387]: I0308 03:17:03.903966 7387 scope.go:117] "RemoveContainer" containerID="dc97f8f27bad8456e85d3556b0266da3f51b3219e17af7d58b019107138fa1da" Mar 08 03:17:03.904246 master-0 kubenswrapper[7387]: I0308 03:17:03.904233 7387 scope.go:117] "RemoveContainer" containerID="11de5739554b7c94cfe0fa61f3b1195f2e9f62f484bc837ca53fa9727626c6dd" Mar 08 03:17:03.904554 master-0 kubenswrapper[7387]: I0308 03:17:03.904326 7387 scope.go:117] "RemoveContainer" containerID="500c7b149f4f2f095cf355a9cad0c5ca80a3d389709c1ca8a3ccda38df4eb432" Mar 08 03:17:03.905329 master-0 kubenswrapper[7387]: I0308 03:17:03.904876 7387 scope.go:117] "RemoveContainer" containerID="101ea5b74925f2629d3b673abc3df2646b2971ed5a02a4d33f72d8e0bafc02dc" Mar 08 03:17:03.909381 master-0 kubenswrapper[7387]: I0308 
03:17:03.909235 7387 scope.go:117] "RemoveContainer" containerID="6a0ebfa9daddb42b992bf1e47626f21a3f530f0fb9ecbcd53e5eedae16779630" Mar 08 03:17:03.910546 master-0 kubenswrapper[7387]: I0308 03:17:03.910478 7387 scope.go:117] "RemoveContainer" containerID="af65ea05bf6d79301d65510b68a66fb2935b708f2ae46cc68e36995843b0c55c" Mar 08 03:17:03.910866 master-0 kubenswrapper[7387]: I0308 03:17:03.910822 7387 scope.go:117] "RemoveContainer" containerID="a90adc87011fbb7cd1968febcefc0ce682e90d9df30e52bef5969b7cab457d60" Mar 08 03:17:03.911503 master-0 kubenswrapper[7387]: I0308 03:17:03.911470 7387 scope.go:117] "RemoveContainer" containerID="e0aecb58f6976eba8696296a6b4880e419ddc1ff4060c7d5c4b00288d7622719" Mar 08 03:17:03.912214 master-0 kubenswrapper[7387]: I0308 03:17:03.912178 7387 scope.go:117] "RemoveContainer" containerID="ae6eee5afe5e46fa6bdda2c614fc3054391ae41ef6fbf435d604af42a3bf8ed4" Mar 08 03:17:03.913774 master-0 kubenswrapper[7387]: I0308 03:17:03.913610 7387 scope.go:117] "RemoveContainer" containerID="817f432c51c661f9dc4a70152616d33f0d5d8c245d1f7dbc4c3905c7f6f13361" Mar 08 03:17:03.913848 master-0 kubenswrapper[7387]: I0308 03:17:03.913817 7387 scope.go:117] "RemoveContainer" containerID="722547003e9f3cd7874fd4300454109695088229261fd8d771f182d81e20178d" Mar 08 03:17:03.914114 master-0 kubenswrapper[7387]: E0308 03:17:03.913883 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-olm-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-olm-operator pod=cluster-olm-operator-77899cf6d-7vlmt_openshift-cluster-olm-operator(4711e21f-da6d-47ee-8722-64663e05de10)\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt" podUID="4711e21f-da6d-47ee-8722-64663e05de10" Mar 08 03:17:03.915237 master-0 kubenswrapper[7387]: I0308 03:17:03.915023 7387 scope.go:117] "RemoveContainer" containerID="df227d89587fe4b6db1c506d3364812306abac68c1497c581534f430e3bbb731" Mar 08 
03:17:03.916670 master-0 kubenswrapper[7387]: I0308 03:17:03.916626 7387 scope.go:117] "RemoveContainer" containerID="5cdb4a8c29f6fa12b785f47ff7ebb0b695874d4bc9ab1dc427c074f4ca967686" Mar 08 03:17:03.918236 master-0 kubenswrapper[7387]: I0308 03:17:03.918185 7387 scope.go:117] "RemoveContainer" containerID="60e1587c9cf4a4020a136e8642e8046f93d54430d105f0f097e182d865618fc6" Mar 08 03:17:03.920780 master-0 kubenswrapper[7387]: I0308 03:17:03.920716 7387 scope.go:117] "RemoveContainer" containerID="7f2168458d76e9e97ed4421cfc89aa215f737c7dfdedd5442acd38bfb2f3b2c4" Mar 08 03:17:03.933102 master-0 kubenswrapper[7387]: I0308 03:17:03.933009 7387 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 08 03:17:04.861813 master-0 kubenswrapper[7387]: I0308 03:17:04.861766 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-gstfr_2a506cf6-bc39-4089-9caa-4c14c4d15c11/openshift-apiserver-operator/1.log" Mar 08 03:17:04.864477 master-0 kubenswrapper[7387]: I0308 03:17:04.864397 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-rz5c8_89e15db4-c541-4d53-878d-706fa022f970/kube-scheduler-operator-container/1.log" Mar 08 03:17:04.871392 master-0 kubenswrapper[7387]: I0308 03:17:04.871341 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4bpl8_197afe92-5912-4e90-a477-e3abe001bbc7/ingress-operator/0.log" Mar 08 03:17:04.873728 master-0 kubenswrapper[7387]: I0308 03:17:04.873669 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-xtwpr_2468d2a3-ec65-4888-a86a-3f66fa311f56/kube-controller-manager-operator/1.log" Mar 08 03:17:04.876925 master-0 kubenswrapper[7387]: I0308 03:17:04.876868 7387 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-c4zs4_103158c5-c99f-4224-bf5a-e23b1aaf9172/cluster-node-tuning-operator/0.log" Mar 08 03:17:04.879330 master-0 kubenswrapper[7387]: I0308 03:17:04.879295 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-k8xgg_90ef7c0a-7c6f-45aa-865d-1e247110b265/authentication-operator/1.log" Mar 08 03:17:04.893485 master-0 kubenswrapper[7387]: I0308 03:17:04.893442 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-wxrfp_89fc77c9-b444-4828-8a35-c63ea9335245/network-operator/1.log" Mar 08 03:17:04.894113 master-0 kubenswrapper[7387]: I0308 03:17:04.894073 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-wxrfp_89fc77c9-b444-4828-8a35-c63ea9335245/network-operator/0.log" Mar 08 03:17:04.898299 master-0 kubenswrapper[7387]: I0308 03:17:04.898267 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-69b6fc6b88-vjmf6_1fa64f1b-9f10-488b-8f94-1600774062c4/service-ca-operator/1.log" Mar 08 03:17:04.901087 master-0 kubenswrapper[7387]: I0308 03:17:04.901045 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-zcr8w_5a058138-8039-4841-821b-7ee5bb8648e4/kube-apiserver-operator/1.log" Mar 08 03:17:04.904210 master-0 kubenswrapper[7387]: I0308 03:17:04.904178 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-h7lpf_0722d9c3-77b8-4770-9171-d4aeba4b0cc7/openshift-controller-manager-operator/1.log" Mar 08 03:17:04.905294 master-0 kubenswrapper[7387]: I0308 03:17:04.905243 7387 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-h7lpf_0722d9c3-77b8-4770-9171-d4aeba4b0cc7/openshift-controller-manager-operator/0.log" Mar 08 03:17:04.907801 master-0 kubenswrapper[7387]: I0308 03:17:04.907754 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/1.log" Mar 08 03:17:04.908285 master-0 kubenswrapper[7387]: I0308 03:17:04.908252 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/0.log" Mar 08 03:17:06.030675 master-0 kubenswrapper[7387]: E0308 03:17:06.030485 7387 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{redhat-operators-ljh97.189abf1c8dfec6f0 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-ljh97,UID:4df5a48e-425c-443e-bfdf-6d57fe1e4638,APIVersion:v1,ResourceVersion:7704,FieldPath:spec.initContainers{extract-content},},Reason:Created,Message:Created container: extract-content,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:12:12.68598552 +0000 UTC m=+69.080461201,LastTimestamp:2026-03-08 03:12:12.68598552 +0000 UTC m=+69.080461201,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:17:06.451338 master-0 kubenswrapper[7387]: I0308 03:17:06.451143 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" Mar 08 03:17:08.369190 master-0 kubenswrapper[7387]: E0308 03:17:08.369094 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:17:08.369190 master-0 kubenswrapper[7387]: E0308 03:17:08.369163 7387 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 08 03:17:09.760946 master-0 kubenswrapper[7387]: E0308 03:17:09.760829 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 08 03:17:10.087330 master-0 kubenswrapper[7387]: I0308 03:17:10.087127 7387 status_manager.go:851] "Failed to get status for pod" podUID="8b8c5365-e7a0-4f69-913f-2e12b142e4a5" pod="openshift-kube-scheduler/installer-1-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)" Mar 08 03:17:12.586183 master-0 kubenswrapper[7387]: I0308 03:17:12.586098 7387 scope.go:117] "RemoveContainer" containerID="8b175beb4b4b0f0ca1a091f7935455e85c66628fb2cebb53ac0ceffa81dfe13c" Mar 08 03:17:12.609269 master-0 kubenswrapper[7387]: I0308 03:17:12.609237 7387 scope.go:117] "RemoveContainer" containerID="ceef095090a1d3d01781b25cb0242da09fb6b070883bd9d80a5643827283dd10" Mar 08 03:17:12.628034 master-0 kubenswrapper[7387]: I0308 03:17:12.628002 7387 scope.go:117] "RemoveContainer" 
containerID="be2882c714bad91ca07c5f4fb9d9845ae081aa06f8fae77c04d5d862e91663ab" Mar 08 03:17:12.644653 master-0 kubenswrapper[7387]: I0308 03:17:12.644579 7387 scope.go:117] "RemoveContainer" containerID="e8ae217b16264d0a65f7a6526e393271363768450bd80231ec390001016f54d9" Mar 08 03:17:12.666094 master-0 kubenswrapper[7387]: I0308 03:17:12.666046 7387 scope.go:117] "RemoveContainer" containerID="2d9e906d444a87e8be6d10da1d15aed8fb665fe3a18c1a9658beaacb2dc08a71" Mar 08 03:17:12.684021 master-0 kubenswrapper[7387]: I0308 03:17:12.683954 7387 scope.go:117] "RemoveContainer" containerID="59842391c2f906e2a1d04139b13a4ad11d03d05812a1e42fe92cdb6ad399f2df" Mar 08 03:17:12.703727 master-0 kubenswrapper[7387]: I0308 03:17:12.703690 7387 scope.go:117] "RemoveContainer" containerID="3a9dc2434f3a5f5442ceae28b6a41707b31b23f92a0be759748599422ca97a2b" Mar 08 03:17:12.724563 master-0 kubenswrapper[7387]: I0308 03:17:12.724507 7387 scope.go:117] "RemoveContainer" containerID="d2717efe98dded98a430bdbb1e6c67542780e4d9e9da8780960f6cb5607dfa1c" Mar 08 03:17:12.744274 master-0 kubenswrapper[7387]: I0308 03:17:12.744238 7387 scope.go:117] "RemoveContainer" containerID="d287272d23a2bc7ff0f8d11895f5450b4df0a1fcc17b6293207d42ed15b1f661" Mar 08 03:17:13.304866 master-0 kubenswrapper[7387]: I0308 03:17:13.304747 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" Mar 08 03:17:16.450616 master-0 kubenswrapper[7387]: I0308 03:17:16.450517 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: 
connect: connection refused" Mar 08 03:17:16.930939 master-0 kubenswrapper[7387]: E0308 03:17:16.930820 7387 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 08 03:17:22.040364 master-0 kubenswrapper[7387]: I0308 03:17:22.040274 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-8qznw_f8711b9f-3d18-4b8d-a263-2c9af9dc68a6/package-server-manager/0.log" Mar 08 03:17:22.041351 master-0 kubenswrapper[7387]: I0308 03:17:22.041010 7387 generic.go:334] "Generic (PLEG): container finished" podID="f8711b9f-3d18-4b8d-a263-2c9af9dc68a6" containerID="61085a1c0f60df971fea9a09a95423c547ccb46d0bf74149a0614fd843a50e98" exitCode=1 Mar 08 03:17:26.450295 master-0 kubenswrapper[7387]: I0308 03:17:26.450193 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" Mar 08 03:17:26.763006 master-0 kubenswrapper[7387]: E0308 03:17:26.762858 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 08 03:17:26.841151 master-0 kubenswrapper[7387]: I0308 03:17:26.840956 7387 patch_prober.go:28] interesting pod/package-server-manager-854648ff6d-8qznw container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused" 
start-of-body= Mar 08 03:17:26.841151 master-0 kubenswrapper[7387]: I0308 03:17:26.841020 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" podUID="f8711b9f-3d18-4b8d-a263-2c9af9dc68a6" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused" Mar 08 03:17:26.841151 master-0 kubenswrapper[7387]: I0308 03:17:26.841080 7387 patch_prober.go:28] interesting pod/package-server-manager-854648ff6d-8qznw container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused" start-of-body= Mar 08 03:17:26.841509 master-0 kubenswrapper[7387]: I0308 03:17:26.841172 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" podUID="f8711b9f-3d18-4b8d-a263-2c9af9dc68a6" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused" Mar 08 03:17:28.394521 master-0 kubenswrapper[7387]: E0308 03:17:28.394165 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:17:18Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:17:18Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:17:18Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:17:18Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ae042a5d32eb2f18d537f2068849e665b55df7d8360daedaaeea98bd2a79e769\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d077bbabe6cb885ed229119008480493e8364e4bfddaa00b099f68c52b016e6b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1733328350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:063b8972231e65eb43f6545ba37804f68138dc54d97b91a652a1c5bc7dc76aa5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:cf682d23b2857e455609879a0867d171a221c18e2cec995dd79570b77c5a4705\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1272201949},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e0c034ae18daa01af8d073f8cc24ae4af87883c664304910eab1167fdfd60c0b\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:ef0c6b9e405f7a452211e063ce07ded04ccbe38b53860bfd71b5a7cd5072830a\\\",\\\"registry.redhat.io/redhat/redhat-marketpl
ace-index:v4.18\\\"],\\\"sizeBytes\\\":1229556414},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:79984dfbdf9aeae3985c7fd7515e12328775c0e7fc4782929d0998f4dd2a87c6\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:7be89499615ec913d0fe40ca89682080a3f1181a066dbc501c877cc7ccbcc9ae\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1220167376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d83537
7ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914\\\"],\\\"sizeBytes\\\":458126424},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9\\\"],\\\"sizeBytes\\\":456575686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626\\\"],\\\"sizeBytes\\\":448828105},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\
"],\\\"sizeBytes\\\":443271011}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:17:33.126950 master-0 kubenswrapper[7387]: I0308 03:17:33.126867 7387 generic.go:334] "Generic (PLEG): container finished" podID="e2495994-736c-4916-b210-ff5633f3387d" containerID="d89cedfa5c6dd99c3607e2b41fd1a5a7721d2add34c9b3bd4ddfc268530aeaaf" exitCode=0 Mar 08 03:17:35.144774 master-0 kubenswrapper[7387]: I0308 03:17:35.144700 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/2.log" Mar 08 03:17:35.146548 master-0 kubenswrapper[7387]: I0308 03:17:35.146485 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/1.log" Mar 08 03:17:35.147482 master-0 kubenswrapper[7387]: I0308 03:17:35.147418 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/0.log" Mar 08 03:17:35.147613 master-0 kubenswrapper[7387]: I0308 03:17:35.147508 7387 generic.go:334] "Generic (PLEG): container finished" podID="9fb588a9-6240-4513-8e4b-248eb43d3f06" containerID="c6876a4a4ece00ccff5b60dc8a905f0f7de29a860707746f02e52710809c00e5" exitCode=1 Mar 08 03:17:36.451281 master-0 
kubenswrapper[7387]: I0308 03:17:36.451195 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" Mar 08 03:17:36.842418 master-0 kubenswrapper[7387]: I0308 03:17:36.842341 7387 patch_prober.go:28] interesting pod/package-server-manager-854648ff6d-8qznw container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused" start-of-body= Mar 08 03:17:36.842704 master-0 kubenswrapper[7387]: I0308 03:17:36.842437 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" podUID="f8711b9f-3d18-4b8d-a263-2c9af9dc68a6" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused" Mar 08 03:17:36.842704 master-0 kubenswrapper[7387]: I0308 03:17:36.842507 7387 patch_prober.go:28] interesting pod/package-server-manager-854648ff6d-8qznw container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused" start-of-body= Mar 08 03:17:36.842972 master-0 kubenswrapper[7387]: I0308 03:17:36.842702 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" podUID="f8711b9f-3d18-4b8d-a263-2c9af9dc68a6" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused" Mar 
08 03:17:37.936117 master-0 kubenswrapper[7387]: E0308 03:17:37.936063 7387 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 08 03:17:37.937379 master-0 kubenswrapper[7387]: E0308 03:17:37.937294 7387 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.037s" Mar 08 03:17:37.947572 master-0 kubenswrapper[7387]: I0308 03:17:37.947500 7387 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 08 03:17:38.395609 master-0 kubenswrapper[7387]: E0308 03:17:38.395506 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:17:40.034104 master-0 kubenswrapper[7387]: E0308 03:17:40.033870 7387 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{community-operators-bv2v9.189abf1c8f164948 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-bv2v9,UID:10895809-a444-42ec-a41f-111e17f6beb3,APIVersion:v1,ResourceVersion:7523,FieldPath:spec.initContainers{extract-content},},Reason:Created,Message:Created container: extract-content,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:12:12.704303432 +0000 UTC m=+69.098779113,LastTimestamp:2026-03-08 03:12:12.704303432 +0000 UTC m=+69.098779113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" 
Mar 08 03:17:40.615540 master-0 kubenswrapper[7387]: I0308 03:17:40.615455 7387 patch_prober.go:28] interesting pod/route-controller-manager-8c4996cd4-qsvqj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.49:8443/healthz\": dial tcp 10.128.0.49:8443: connect: connection refused" start-of-body=
Mar 08 03:17:40.615808 master-0 kubenswrapper[7387]: I0308 03:17:40.615543 7387 patch_prober.go:28] interesting pod/route-controller-manager-8c4996cd4-qsvqj container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.49:8443/healthz\": dial tcp 10.128.0.49:8443: connect: connection refused" start-of-body=
Mar 08 03:17:40.615808 master-0 kubenswrapper[7387]: I0308 03:17:40.615558 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.49:8443/healthz\": dial tcp 10.128.0.49:8443: connect: connection refused"
Mar 08 03:17:40.615808 master-0 kubenswrapper[7387]: I0308 03:17:40.615637 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.49:8443/healthz\": dial tcp 10.128.0.49:8443: connect: connection refused"
Mar 08 03:17:43.764399 master-0 kubenswrapper[7387]: E0308 03:17:43.764285 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 08 03:17:46.451130 master-0 kubenswrapper[7387]: I0308 03:17:46.451058 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused"
Mar 08 03:17:46.841875 master-0 kubenswrapper[7387]: I0308 03:17:46.841685 7387 patch_prober.go:28] interesting pod/package-server-manager-854648ff6d-8qznw container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused" start-of-body=
Mar 08 03:17:46.841875 master-0 kubenswrapper[7387]: I0308 03:17:46.841775 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" podUID="f8711b9f-3d18-4b8d-a263-2c9af9dc68a6" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused"
Mar 08 03:17:46.841875 master-0 kubenswrapper[7387]: I0308 03:17:46.841684 7387 patch_prober.go:28] interesting pod/package-server-manager-854648ff6d-8qznw container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused" start-of-body=
Mar 08 03:17:46.841875 master-0 kubenswrapper[7387]: I0308 03:17:46.841883 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" podUID="f8711b9f-3d18-4b8d-a263-2c9af9dc68a6" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused"
Mar 08 03:17:48.396597 master-0 kubenswrapper[7387]: E0308 03:17:48.396440 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:17:50.615947 master-0 kubenswrapper[7387]: I0308 03:17:50.615804 7387 patch_prober.go:28] interesting pod/route-controller-manager-8c4996cd4-qsvqj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.49:8443/healthz\": dial tcp 10.128.0.49:8443: connect: connection refused" start-of-body=
Mar 08 03:17:50.616974 master-0 kubenswrapper[7387]: I0308 03:17:50.616018 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.49:8443/healthz\": dial tcp 10.128.0.49:8443: connect: connection refused"
Mar 08 03:17:50.616974 master-0 kubenswrapper[7387]: I0308 03:17:50.615891 7387 patch_prober.go:28] interesting pod/route-controller-manager-8c4996cd4-qsvqj container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.49:8443/healthz\": dial tcp 10.128.0.49:8443: connect: connection refused" start-of-body=
Mar 08 03:17:50.616974 master-0 kubenswrapper[7387]: I0308 03:17:50.616159 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.49:8443/healthz\": dial tcp 10.128.0.49:8443: connect: connection refused"
Mar 08 03:17:56.451702 master-0 kubenswrapper[7387]: I0308 03:17:56.451563 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused"
Mar 08 03:17:56.841763 master-0 kubenswrapper[7387]: I0308 03:17:56.841703 7387 patch_prober.go:28] interesting pod/package-server-manager-854648ff6d-8qznw container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused" start-of-body=
Mar 08 03:17:56.842069 master-0 kubenswrapper[7387]: I0308 03:17:56.841806 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" podUID="f8711b9f-3d18-4b8d-a263-2c9af9dc68a6" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused"
Mar 08 03:17:58.397480 master-0 kubenswrapper[7387]: E0308 03:17:58.397400 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:18:00.615167 master-0 kubenswrapper[7387]: I0308 03:18:00.615061 7387 patch_prober.go:28] interesting pod/route-controller-manager-8c4996cd4-qsvqj container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.49:8443/healthz\": dial tcp 10.128.0.49:8443: connect: connection refused" start-of-body=
Mar 08 03:18:00.616011 master-0 kubenswrapper[7387]: I0308 03:18:00.615072 7387 patch_prober.go:28] interesting pod/route-controller-manager-8c4996cd4-qsvqj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.49:8443/healthz\": dial tcp 10.128.0.49:8443: connect: connection refused" start-of-body=
Mar 08 03:18:00.616011 master-0 kubenswrapper[7387]: I0308 03:18:00.615280 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.49:8443/healthz\": dial tcp 10.128.0.49:8443: connect: connection refused"
Mar 08 03:18:00.616011 master-0 kubenswrapper[7387]: I0308 03:18:00.615206 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.49:8443/healthz\": dial tcp 10.128.0.49:8443: connect: connection refused"
Mar 08 03:18:00.765785 master-0 kubenswrapper[7387]: E0308 03:18:00.765658 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 08 03:18:01.343957 master-0 kubenswrapper[7387]: I0308 03:18:01.343841 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-7f65c457f5-7k8j7_1d446527-f3fd-4a37-a980-7445031928d1/kube-storage-version-migrator-operator/2.log"
Mar 08 03:18:01.344654 master-0 kubenswrapper[7387]: I0308 03:18:01.344605 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-7f65c457f5-7k8j7_1d446527-f3fd-4a37-a980-7445031928d1/kube-storage-version-migrator-operator/1.log"
Mar 08 03:18:01.344741 master-0 kubenswrapper[7387]: I0308 03:18:01.344671 7387 generic.go:334] "Generic (PLEG): container finished" podID="1d446527-f3fd-4a37-a980-7445031928d1" containerID="b009862d75dae9f3e9089264c59ffc33de04ddd735304db6fbfcc002f9536734" exitCode=255
Mar 08 03:18:06.450880 master-0 kubenswrapper[7387]: I0308 03:18:06.450758 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused"
Mar 08 03:18:06.841740 master-0 kubenswrapper[7387]: I0308 03:18:06.841630 7387 patch_prober.go:28] interesting pod/package-server-manager-854648ff6d-8qznw container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused" start-of-body=
Mar 08 03:18:06.842075 master-0 kubenswrapper[7387]: I0308 03:18:06.841737 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" podUID="f8711b9f-3d18-4b8d-a263-2c9af9dc68a6" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused"
Mar 08 03:18:08.398134 master-0 kubenswrapper[7387]: E0308 03:18:08.398026 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:18:08.398134 master-0 kubenswrapper[7387]: E0308 03:18:08.398090 7387 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 08 03:18:10.088391 master-0 kubenswrapper[7387]: I0308 03:18:10.088272 7387 status_manager.go:851] "Failed to get status for pod" podUID="10895809-a444-42ec-a41f-111e17f6beb3" pod="openshift-marketplace/community-operators-bv2v9" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods community-operators-bv2v9)"
Mar 08 03:18:10.615724 master-0 kubenswrapper[7387]: I0308 03:18:10.615656 7387 patch_prober.go:28] interesting pod/route-controller-manager-8c4996cd4-qsvqj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.49:8443/healthz\": dial tcp 10.128.0.49:8443: connect: connection refused" start-of-body=
Mar 08 03:18:10.615724 master-0 kubenswrapper[7387]: I0308 03:18:10.615723 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.49:8443/healthz\": dial tcp 10.128.0.49:8443: connect: connection refused"
Mar 08 03:18:11.950816 master-0 kubenswrapper[7387]: E0308 03:18:11.950737 7387 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0"
Mar 08 03:18:11.952062 master-0 kubenswrapper[7387]: E0308 03:18:11.950971 7387 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.014s"
Mar 08 03:18:11.952062 master-0 kubenswrapper[7387]: I0308 03:18:11.951004 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" event={"ID":"90ef7c0a-7c6f-45aa-865d-1e247110b265","Type":"ContainerDied","Data":"107e7aadbde6b65c42eb4756264c5507aea9b4627e7947de6f6b874799048d52"}
Mar 08 03:18:11.952062 master-0 kubenswrapper[7387]: I0308 03:18:11.951101 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"
Mar 08 03:18:11.952062 master-0 kubenswrapper[7387]: I0308 03:18:11.951123 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2"
Mar 08 03:18:11.952062 master-0 kubenswrapper[7387]: I0308 03:18:11.951213 7387 scope.go:117] "RemoveContainer" containerID="107e7aadbde6b65c42eb4756264c5507aea9b4627e7947de6f6b874799048d52"
Mar 08 03:18:11.962298 master-0 kubenswrapper[7387]: I0308 03:18:11.962240 7387 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 08 03:18:12.419384 master-0 kubenswrapper[7387]: I0308 03:18:12.419278 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-k8xgg_90ef7c0a-7c6f-45aa-865d-1e247110b265/authentication-operator/1.log"
Mar 08 03:18:12.784244 master-0 kubenswrapper[7387]: I0308 03:18:12.784144 7387 scope.go:117] "RemoveContainer" containerID="444ccfffc52a5a8ffccee9bac8ab1880482309c7e1b3f7a74c0d255becf8fee0"
Mar 08 03:18:12.809500 master-0 kubenswrapper[7387]: I0308 03:18:12.809427 7387 scope.go:117] "RemoveContainer" containerID="f3c0f05b8863cad41e739a3290ee1b766e3215209ff171cd04766d542d2cefd2"
Mar 08 03:18:12.832287 master-0 kubenswrapper[7387]: I0308 03:18:12.832228 7387 scope.go:117] "RemoveContainer"
containerID="768949e4d93a435cb37be6fb573bf2225669a3e078f13a7117be88e9456f605b" Mar 08 03:18:14.037433 master-0 kubenswrapper[7387]: E0308 03:18:14.037215 7387 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{redhat-operators-ljh97.189abf1c8f16d80f openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-ljh97,UID:4df5a48e-425c-443e-bfdf-6d57fe1e4638,APIVersion:v1,ResourceVersion:7704,FieldPath:spec.initContainers{extract-content},},Reason:Started,Message:Started container extract-content,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:12:12.704339983 +0000 UTC m=+69.098815664,LastTimestamp:2026-03-08 03:12:12.704339983 +0000 UTC m=+69.098815664,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:18:15.007713 master-0 kubenswrapper[7387]: I0308 03:18:15.007518 7387 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-k8xgg container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 03:18:15.008034 master-0 kubenswrapper[7387]: I0308 03:18:15.007711 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" podUID="90ef7c0a-7c6f-45aa-865d-1e247110b265" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 
03:18:16.450422 master-0 kubenswrapper[7387]: I0308 03:18:16.450344 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" Mar 08 03:18:16.841620 master-0 kubenswrapper[7387]: I0308 03:18:16.841541 7387 patch_prober.go:28] interesting pod/package-server-manager-854648ff6d-8qznw container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused" start-of-body= Mar 08 03:18:16.841855 master-0 kubenswrapper[7387]: I0308 03:18:16.841650 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" podUID="f8711b9f-3d18-4b8d-a263-2c9af9dc68a6" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused" Mar 08 03:18:17.767653 master-0 kubenswrapper[7387]: E0308 03:18:17.767535 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 08 03:18:20.615938 master-0 kubenswrapper[7387]: I0308 03:18:20.615844 7387 patch_prober.go:28] interesting pod/route-controller-manager-8c4996cd4-qsvqj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.49:8443/healthz\": dial tcp 10.128.0.49:8443: connect: connection refused" start-of-body= Mar 08 03:18:20.616452 master-0 
kubenswrapper[7387]: I0308 03:18:20.615999 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.49:8443/healthz\": dial tcp 10.128.0.49:8443: connect: connection refused" Mar 08 03:18:23.494269 master-0 kubenswrapper[7387]: I0308 03:18:23.494210 7387 generic.go:334] "Generic (PLEG): container finished" podID="d2a53f3b-7e22-47eb-9f28-da3441b3662f" containerID="50e75d2b6ff206804802c9331065b3194c6e165af0a4d329ce7b16d5dd4ec36b" exitCode=0 Mar 08 03:18:25.007002 master-0 kubenswrapper[7387]: I0308 03:18:25.006518 7387 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-k8xgg container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 03:18:25.007002 master-0 kubenswrapper[7387]: I0308 03:18:25.006651 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" podUID="90ef7c0a-7c6f-45aa-865d-1e247110b265" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 03:18:26.450932 master-0 kubenswrapper[7387]: I0308 03:18:26.450830 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" Mar 08 03:18:26.840812 
master-0 kubenswrapper[7387]: I0308 03:18:26.840682 7387 patch_prober.go:28] interesting pod/package-server-manager-854648ff6d-8qznw container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused" start-of-body= Mar 08 03:18:26.840812 master-0 kubenswrapper[7387]: I0308 03:18:26.840766 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" podUID="f8711b9f-3d18-4b8d-a263-2c9af9dc68a6" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.26:8080/healthz\": dial tcp 10.128.0.26:8080: connect: connection refused" Mar 08 03:18:27.189418 master-0 kubenswrapper[7387]: E0308 03:18:27.188625 7387 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.237s" Mar 08 03:18:27.189418 master-0 kubenswrapper[7387]: I0308 03:18:27.188672 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" event={"ID":"89e15db4-c541-4d53-878d-706fa022f970","Type":"ContainerDied","Data":"6dd339bdc8fcd78151718754ca62bc4f1f1c78bc15d5ff2223af1f5068c80ca0"} Mar 08 03:18:27.189418 master-0 kubenswrapper[7387]: I0308 03:18:27.188707 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:18:27.189418 master-0 kubenswrapper[7387]: I0308 03:18:27.188811 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" Mar 08 03:18:27.189418 master-0 kubenswrapper[7387]: I0308 03:18:27.188822 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" Mar 08 03:18:27.189418 master-0 kubenswrapper[7387]: I0308 03:18:27.188831 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:18:27.189418 master-0 kubenswrapper[7387]: I0308 03:18:27.188840 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 08 03:18:27.189418 master-0 kubenswrapper[7387]: I0308 03:18:27.188851 7387 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="c2548fc6-dd78-4305-8e35-b0648dfd853f" Mar 08 03:18:27.189418 master-0 kubenswrapper[7387]: I0308 03:18:27.188862 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" Mar 08 03:18:27.190617 master-0 kubenswrapper[7387]: I0308 03:18:27.189686 7387 scope.go:117] "RemoveContainer" containerID="6dd339bdc8fcd78151718754ca62bc4f1f1c78bc15d5ff2223af1f5068c80ca0" Mar 08 03:18:27.190617 master-0 kubenswrapper[7387]: I0308 03:18:27.190575 7387 scope.go:117] "RemoveContainer" containerID="c112ca6cd11ea4c9ce69d6d6d519c8fce15ec706e2d5984472b111b57942340d" Mar 08 03:18:27.190617 master-0 kubenswrapper[7387]: I0308 03:18:27.190612 7387 scope.go:117] "RemoveContainer" containerID="67a655ba69c1284df3e55d35d8747eb2453fb400eccb0f1604d78be6e1c5d034" Mar 08 03:18:27.192069 master-0 kubenswrapper[7387]: I0308 03:18:27.191890 7387 scope.go:117] "RemoveContainer" containerID="b009862d75dae9f3e9089264c59ffc33de04ddd735304db6fbfcc002f9536734" Mar 08 03:18:27.201098 master-0 kubenswrapper[7387]: I0308 03:18:27.200856 7387 scope.go:117] "RemoveContainer" containerID="817f432c51c661f9dc4a70152616d33f0d5d8c245d1f7dbc4c3905c7f6f13361" Mar 08 03:18:27.201289 master-0 kubenswrapper[7387]: I0308 03:18:27.201238 7387 scope.go:117] "RemoveContainer" 
containerID="c6876a4a4ece00ccff5b60dc8a905f0f7de29a860707746f02e52710809c00e5" Mar 08 03:18:27.202625 master-0 kubenswrapper[7387]: I0308 03:18:27.202250 7387 scope.go:117] "RemoveContainer" containerID="50e75d2b6ff206804802c9331065b3194c6e165af0a4d329ce7b16d5dd4ec36b" Mar 08 03:18:27.202955 master-0 kubenswrapper[7387]: I0308 03:18:27.202875 7387 scope.go:117] "RemoveContainer" containerID="61085a1c0f60df971fea9a09a95423c547ccb46d0bf74149a0614fd843a50e98" Mar 08 03:18:27.203062 master-0 kubenswrapper[7387]: I0308 03:18:27.202937 7387 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 08 03:18:27.203397 master-0 kubenswrapper[7387]: I0308 03:18:27.203294 7387 scope.go:117] "RemoveContainer" containerID="83e1d070000e62345139ef045f8a5e382a6175a1f7868ac9989b2dfe38a06c65" Mar 08 03:18:27.207285 master-0 kubenswrapper[7387]: I0308 03:18:27.207202 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" Mar 08 03:18:27.207481 master-0 kubenswrapper[7387]: I0308 03:18:27.207302 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" Mar 08 03:18:27.207481 master-0 kubenswrapper[7387]: I0308 03:18:27.207332 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" event={"ID":"2a506cf6-bc39-4089-9caa-4c14c4d15c11","Type":"ContainerDied","Data":"886c2fa8f6da76d9fb13730fe0f87e8425e3e99bb43a5c2162f25e3c2baef044"} Mar 08 03:18:27.207481 master-0 kubenswrapper[7387]: I0308 03:18:27.207378 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:18:27.207481 master-0 kubenswrapper[7387]: I0308 03:18:27.207411 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-etcd/etcd-master-0-master-0"] Mar 08 03:18:27.207481 master-0 kubenswrapper[7387]: I0308 03:18:27.207437 7387 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="c2548fc6-dd78-4305-8e35-b0648dfd853f" Mar 08 03:18:27.207941 master-0 kubenswrapper[7387]: I0308 03:18:27.207494 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" Mar 08 03:18:27.207941 master-0 kubenswrapper[7387]: I0308 03:18:27.207538 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" Mar 08 03:18:27.207941 master-0 kubenswrapper[7387]: I0308 03:18:27.207567 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" Mar 08 03:18:27.207941 master-0 kubenswrapper[7387]: I0308 03:18:27.207589 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" Mar 08 03:18:27.207941 master-0 kubenswrapper[7387]: I0308 03:18:27.207617 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" event={"ID":"90ef7c0a-7c6f-45aa-865d-1e247110b265","Type":"ContainerStarted","Data":"722547003e9f3cd7874fd4300454109695088229261fd8d771f182d81e20178d"} Mar 08 03:18:27.207941 master-0 kubenswrapper[7387]: I0308 03:18:27.207649 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:18:27.207941 master-0 kubenswrapper[7387]: I0308 03:18:27.207709 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:18:27.207941 master-0 kubenswrapper[7387]: I0308 03:18:27.207734 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr" event={"ID":"2468d2a3-ec65-4888-a86a-3f66fa311f56","Type":"ContainerDied","Data":"5344ff16fad08b9c5182bd478544ec5dffdc1907ce0e2bdd88e67cb83807f31b"} Mar 08 03:18:27.207941 master-0 kubenswrapper[7387]: I0308 03:18:27.207767 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:18:27.207941 master-0 kubenswrapper[7387]: I0308 03:18:27.207798 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"e5123ed41e1d59409ecb7b6a093fbec053e4ad2aa3edc24c60e3ea460620ff6a"} Mar 08 03:18:27.207941 master-0 kubenswrapper[7387]: I0308 03:18:27.207829 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" event={"ID":"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b","Type":"ContainerStarted","Data":"8b17e447bf7b73e8b40c761d522dbc3e0c3fec36ea0bf2258e54193e9736e408"} Mar 08 03:18:27.207941 master-0 kubenswrapper[7387]: I0308 03:18:27.207853 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7" event={"ID":"1d446527-f3fd-4a37-a980-7445031928d1","Type":"ContainerStarted","Data":"0529887e94e92870d8170b7a6f9ac44c1a9e4434031edde3aa2a6844aae2f3c1"} Mar 08 03:18:27.207941 master-0 kubenswrapper[7387]: I0308 03:18:27.207878 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-ppdzb" 
event={"ID":"4fd323ae-11bf-4207-bdce-4d51a9c19dc3","Type":"ContainerStarted","Data":"7ee5b861c39dc6b2389534ffbe109ec1e2487bbf38c2ab8f456f84e12449168e"} Mar 08 03:18:27.207941 master-0 kubenswrapper[7387]: I0308 03:18:27.207943 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.207972 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.207994 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208021 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" event={"ID":"89fc77c9-b444-4828-8a35-c63ea9335245","Type":"ContainerStarted","Data":"6a0ebfa9daddb42b992bf1e47626f21a3f530f0fb9ecbcd53e5eedae16779630"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208047 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6" event={"ID":"1fa64f1b-9f10-488b-8f94-1600774062c4","Type":"ContainerStarted","Data":"7f2168458d76e9e97ed4421cfc89aa215f737c7dfdedd5442acd38bfb2f3b2c4"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208073 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w" event={"ID":"5a058138-8039-4841-821b-7ee5bb8648e4","Type":"ContainerStarted","Data":"dc97f8f27bad8456e85d3556b0266da3f51b3219e17af7d58b019107138fa1da"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208100 7387 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"d3c34dc0fc7fb67fe9086e5e2aa23fc62ce1243d54049c653663a3e880adacde"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208130 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" event={"ID":"2a506cf6-bc39-4089-9caa-4c14c4d15c11","Type":"ContainerStarted","Data":"546471fba50615e89619e415aa22b95c50bac9cc8ea20a1f87e7260bbf84e270"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208155 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"0a2e5993-e0cb-4c63-9dda-abbb60bfe42b","Type":"ContainerDied","Data":"af1629d870a431db24e184fef7d2d042da3102cfaa950212d16542cff7e837ad"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208184 7387 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af1629d870a431db24e184fef7d2d042da3102cfaa950212d16542cff7e837ad" Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208207 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"0a8d4b89-fd81-4418-9f72-c8447fad86ad","Type":"ContainerDied","Data":"5e69232ee32af2930950dbc1ce8dd12459189b96461d880072fd507e99455d62"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208230 7387 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e69232ee32af2930950dbc1ce8dd12459189b96461d880072fd507e99455d62" Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208252 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" 
event={"ID":"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6","Type":"ContainerDied","Data":"207b42b97b0cc7b2a3b3fe717f857e83a1274408fc29faf61812a15be3fc5f86"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208278 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf" event={"ID":"0722d9c3-77b8-4770-9171-d4aeba4b0cc7","Type":"ContainerDied","Data":"df227d89587fe4b6db1c506d3364812306abac68c1497c581534f430e3bbb731"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208307 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" event={"ID":"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b","Type":"ContainerDied","Data":"847ec71b717fbc403d7670e2fb6fcb0eb16c5961bfffd67ba80ebb137144703d"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208335 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" event={"ID":"9fb588a9-6240-4513-8e4b-248eb43d3f06","Type":"ContainerDied","Data":"628092a132c01e680651df338ffa2bf9cd4d31a076b8f10e995aa7b3b2bc5d22"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208364 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" event={"ID":"399c5025-da66-4c52-8e68-ea6c996d9cc8","Type":"ContainerDied","Data":"a8f3f14f501b72ff362550257f13a332eecf70ec4f446aeb3d199baf5fd9fcca"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208392 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"ccac585e78a02166fe1d8053ba7b0fc4bb461435c208a87a1aaeb9d9552f95ee"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208420 7387 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" event={"ID":"197afe92-5912-4e90-a477-e3abe001bbc7","Type":"ContainerDied","Data":"11de5739554b7c94cfe0fa61f3b1195f2e9f62f484bc837ca53fa9727626c6dd"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208446 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr" event={"ID":"2468d2a3-ec65-4888-a86a-3f66fa311f56","Type":"ContainerStarted","Data":"e0aecb58f6976eba8696296a6b4880e419ddc1ff4060c7d5c4b00288d7622719"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208469 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" event={"ID":"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6","Type":"ContainerStarted","Data":"0c7ee191b0d761ce93be93342e9e3606726dcf3941ed2cb569025a1100bcd65c"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208492 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" event={"ID":"9fb588a9-6240-4513-8e4b-248eb43d3f06","Type":"ContainerStarted","Data":"5cdb4a8c29f6fa12b785f47ff7ebb0b695874d4bc9ab1dc427c074f4ca967686"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208518 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" event={"ID":"89e15db4-c541-4d53-878d-706fa022f970","Type":"ContainerStarted","Data":"9a657401ad344c6bcb17809838c09bd965a31aa4d11aa9a3d44a7eea2ef4074b"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208546 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"2acce355218fb4db709f8cd62c68924badb7990d91f8c47577e1cc6d989432b9"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208571 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" event={"ID":"90ef7c0a-7c6f-45aa-865d-1e247110b265","Type":"ContainerDied","Data":"722547003e9f3cd7874fd4300454109695088229261fd8d771f182d81e20178d"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208596 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch" event={"ID":"631b3a8e-43e0-4818-b6e1-bd61ac531ab6","Type":"ContainerDied","Data":"ae6eee5afe5e46fa6bdda2c614fc3054391ae41ef6fbf435d604af42a3bf8ed4"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208622 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"d3c34dc0fc7fb67fe9086e5e2aa23fc62ce1243d54049c653663a3e880adacde"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208651 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" event={"ID":"9fb588a9-6240-4513-8e4b-248eb43d3f06","Type":"ContainerDied","Data":"5cdb4a8c29f6fa12b785f47ff7ebb0b695874d4bc9ab1dc427c074f4ca967686"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208677 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" event={"ID":"dd1c09ba-b44c-446a-abe0-53ac3e910a77","Type":"ContainerDied","Data":"101ea5b74925f2629d3b673abc3df2646b2971ed5a02a4d33f72d8e0bafc02dc"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208702 7387 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w" event={"ID":"5a058138-8039-4841-821b-7ee5bb8648e4","Type":"ContainerDied","Data":"dc97f8f27bad8456e85d3556b0266da3f51b3219e17af7d58b019107138fa1da"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208729 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" event={"ID":"2a506cf6-bc39-4089-9caa-4c14c4d15c11","Type":"ContainerDied","Data":"546471fba50615e89619e415aa22b95c50bac9cc8ea20a1f87e7260bbf84e270"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208754 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" event={"ID":"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b","Type":"ContainerDied","Data":"8b17e447bf7b73e8b40c761d522dbc3e0c3fec36ea0bf2258e54193e9736e408"} Mar 08 03:18:27.208733 master-0 kubenswrapper[7387]: I0308 03:18:27.208783 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7" event={"ID":"1d446527-f3fd-4a37-a980-7445031928d1","Type":"ContainerDied","Data":"0529887e94e92870d8170b7a6f9ac44c1a9e4434031edde3aa2a6844aae2f3c1"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.208809 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" event={"ID":"89fc77c9-b444-4828-8a35-c63ea9335245","Type":"ContainerDied","Data":"6a0ebfa9daddb42b992bf1e47626f21a3f530f0fb9ecbcd53e5eedae16779630"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.208836 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6" 
event={"ID":"1fa64f1b-9f10-488b-8f94-1600774062c4","Type":"ContainerDied","Data":"7f2168458d76e9e97ed4421cfc89aa215f737c7dfdedd5442acd38bfb2f3b2c4"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.208863 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" event={"ID":"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b","Type":"ContainerStarted","Data":"d67b7c07c51ae55685846daed44be4e4bc31d9601f7c2247d08f667ff264cd33"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.208887 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" event={"ID":"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b","Type":"ContainerStarted","Data":"83e1d070000e62345139ef045f8a5e382a6175a1f7868ac9989b2dfe38a06c65"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.208950 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" event={"ID":"89e15db4-c541-4d53-878d-706fa022f970","Type":"ContainerDied","Data":"9a657401ad344c6bcb17809838c09bd965a31aa4d11aa9a3d44a7eea2ef4074b"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.208979 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr" event={"ID":"2468d2a3-ec65-4888-a86a-3f66fa311f56","Type":"ContainerDied","Data":"e0aecb58f6976eba8696296a6b4880e419ddc1ff4060c7d5c4b00288d7622719"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209005 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"c112ca6cd11ea4c9ce69d6d6d519c8fce15ec706e2d5984472b111b57942340d"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 
03:18:27.209031 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" event={"ID":"399c5025-da66-4c52-8e68-ea6c996d9cc8","Type":"ContainerStarted","Data":"1341190aa2856a973f485203a951081b82fd1c38dd7ccb12a11db05205beefcc"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209066 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7" event={"ID":"1d446527-f3fd-4a37-a980-7445031928d1","Type":"ContainerStarted","Data":"b009862d75dae9f3e9089264c59ffc33de04ddd735304db6fbfcc002f9536734"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209089 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-xbrdp" event={"ID":"3d69f101-60a8-41fd-bcda-4eb654c626a2","Type":"ContainerDied","Data":"60e1587c9cf4a4020a136e8642e8046f93d54430d105f0f097e182d865618fc6"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209118 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"67a655ba69c1284df3e55d35d8747eb2453fb400eccb0f1604d78be6e1c5d034"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209144 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5" event={"ID":"7af634f0-65ac-402a-acd6-a8aad11b37ab","Type":"ContainerDied","Data":"af65ea05bf6d79301d65510b68a66fb2935b708f2ae46cc68e36995843b0c55c"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209170 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" 
event={"ID":"bd1bcaff-7dbd-4559-92fc-5453993f643e","Type":"ContainerDied","Data":"122d82dfb1bfd9c05bd161084f45586e27293d3320c13ab8454659ed4cdae5c0"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209196 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" event={"ID":"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b","Type":"ContainerDied","Data":"83e1d070000e62345139ef045f8a5e382a6175a1f7868ac9989b2dfe38a06c65"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209226 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" event={"ID":"103158c5-c99f-4224-bf5a-e23b1aaf9172","Type":"ContainerDied","Data":"a90adc87011fbb7cd1968febcefc0ce682e90d9df30e52bef5969b7cab457d60"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209252 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt" event={"ID":"4711e21f-da6d-47ee-8722-64663e05de10","Type":"ContainerDied","Data":"817f432c51c661f9dc4a70152616d33f0d5d8c245d1f7dbc4c3905c7f6f13361"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209277 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"c112ca6cd11ea4c9ce69d6d6d519c8fce15ec706e2d5984472b111b57942340d"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209304 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" event={"ID":"d82cf0db-0891-482d-856b-1675843042dd","Type":"ContainerDied","Data":"500c7b149f4f2f095cf355a9cad0c5ca80a3d389709c1ca8a3ccda38df4eb432"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209332 7387 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5" event={"ID":"7af634f0-65ac-402a-acd6-a8aad11b37ab","Type":"ContainerStarted","Data":"7d5086bc52f5bb65f0e405da68bda521bfa3fc867442a2ce84f387697f4853be"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209357 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" event={"ID":"2a506cf6-bc39-4089-9caa-4c14c4d15c11","Type":"ContainerStarted","Data":"1d5204ce567ac69cf82074daeb2d6d762b5dea3e2e48fc87e314063a45817203"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209423 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" event={"ID":"89e15db4-c541-4d53-878d-706fa022f970","Type":"ContainerStarted","Data":"279e20703ffc1523384ecb744bab2f75686744f29f2bd2fc07a960cf86d7af7c"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209448 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" event={"ID":"d82cf0db-0891-482d-856b-1675843042dd","Type":"ContainerStarted","Data":"79789acd1e809055c0776529cff51e860873c6bb9594a823c34d658fe5d02349"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209473 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" event={"ID":"197afe92-5912-4e90-a477-e3abe001bbc7","Type":"ContainerStarted","Data":"84c99d58596591f517162ce0801066c3386afbe465547d2042ee596ce9855fda"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209497 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr" 
event={"ID":"2468d2a3-ec65-4888-a86a-3f66fa311f56","Type":"ContainerStarted","Data":"c6227c869f9005e95f446273c65ad19705819a8f1fec09ed23d91f2253df5b7d"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209523 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" event={"ID":"103158c5-c99f-4224-bf5a-e23b1aaf9172","Type":"ContainerStarted","Data":"7828a0e0fa2706d250ad69378649c5fb641ba621ee124550bb4757af01298f2e"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209546 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" event={"ID":"90ef7c0a-7c6f-45aa-865d-1e247110b265","Type":"ContainerStarted","Data":"dd4d219059033c12e8a9f8e3d34a3c3099d9ccfe2b147440dd167716ec750fdc"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209574 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" event={"ID":"dd1c09ba-b44c-446a-abe0-53ac3e910a77","Type":"ContainerStarted","Data":"41ca15e4a6ad4847ca08ee2bbd0a8d8131abfdfd04eefcf604b4ac8e41fe27f8"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209598 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" event={"ID":"bd1bcaff-7dbd-4559-92fc-5453993f643e","Type":"ContainerStarted","Data":"af9e47bdeb07b2e79a9535acbbaf30eba9c435fc3d8897762bb3fb61a91678ea"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209621 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-xbrdp" event={"ID":"3d69f101-60a8-41fd-bcda-4eb654c626a2","Type":"ContainerStarted","Data":"35a84530b9b77d1b843b53e9598fc2ad2b53c4132c228552e8ac9e5d303df9ce"} Mar 08 03:18:27.210762 
master-0 kubenswrapper[7387]: I0308 03:18:27.209645 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" event={"ID":"89fc77c9-b444-4828-8a35-c63ea9335245","Type":"ContainerStarted","Data":"2d1f35ff4fbf411febbede650e49c2bb74f638fdc3d27726c7043dd06f0d5e3d"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209669 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch" event={"ID":"631b3a8e-43e0-4818-b6e1-bd61ac531ab6","Type":"ContainerStarted","Data":"3c9001c002bea8ae81641c5d4b6e3f763d09a9b2d453bd324d0fd602cf7b8d18"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209694 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6" event={"ID":"1fa64f1b-9f10-488b-8f94-1600774062c4","Type":"ContainerStarted","Data":"c5943b694a77c0302101d6a324348e34a33f4a5d12b160d170755271c5624f54"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209719 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w" event={"ID":"5a058138-8039-4841-821b-7ee5bb8648e4","Type":"ContainerStarted","Data":"72b1351e9a3c52004d63474cc4899d00eb9ec35191bb77729c1e4a2c5db91758"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209745 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf" event={"ID":"0722d9c3-77b8-4770-9171-d4aeba4b0cc7","Type":"ContainerStarted","Data":"5143cbadf379a54eeca92346f6f8d879538d415d4167dd1961c3f4a4dfe1810b"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209769 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" 
event={"ID":"9fb588a9-6240-4513-8e4b-248eb43d3f06","Type":"ContainerStarted","Data":"c6876a4a4ece00ccff5b60dc8a905f0f7de29a860707746f02e52710809c00e5"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209822 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"e12c78daead84383caebe7336896e67a8f0e6a3ed9ea399e900316d1f1ebd509"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209847 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"b0ddb055abb1b0b4b1b7109c72f5a73dd28b828d536a0eddb00d00cac4e3d31a"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209871 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"2d0e4151dfd779023c2cdffa0f63f74dabd1568f864ae7ed089a95f39e140429"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209895 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"5f2d918ed65c54c86661b7dcec562a46483a207694d4f8bd8c866e26621dca11"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209956 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"d43c128670bddbe13157be5c410fd92ee875166fc289159e03df7915d0b4a4b5"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.209981 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" 
event={"ID":"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6","Type":"ContainerDied","Data":"61085a1c0f60df971fea9a09a95423c547ccb46d0bf74149a0614fd843a50e98"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.210009 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" event={"ID":"e2495994-736c-4916-b210-ff5633f3387d","Type":"ContainerDied","Data":"d89cedfa5c6dd99c3607e2b41fd1a5a7721d2add34c9b3bd4ddfc268530aeaaf"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.210036 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" event={"ID":"9fb588a9-6240-4513-8e4b-248eb43d3f06","Type":"ContainerDied","Data":"c6876a4a4ece00ccff5b60dc8a905f0f7de29a860707746f02e52710809c00e5"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.210064 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7" event={"ID":"1d446527-f3fd-4a37-a980-7445031928d1","Type":"ContainerDied","Data":"b009862d75dae9f3e9089264c59ffc33de04ddd735304db6fbfcc002f9536734"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.210099 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86" event={"ID":"d2a53f3b-7e22-47eb-9f28-da3441b3662f","Type":"ContainerDied","Data":"50e75d2b6ff206804802c9331065b3194c6e165af0a4d329ce7b16d5dd4ec36b"} Mar 08 03:18:27.210762 master-0 kubenswrapper[7387]: I0308 03:18:27.210476 7387 scope.go:117] "RemoveContainer" containerID="d89cedfa5c6dd99c3607e2b41fd1a5a7721d2add34c9b3bd4ddfc268530aeaaf" Mar 08 03:18:27.215473 master-0 kubenswrapper[7387]: I0308 03:18:27.215346 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 08 
03:18:27.222404 master-0 kubenswrapper[7387]: I0308 03:18:27.222346 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 08 03:18:27.224992 master-0 kubenswrapper[7387]: I0308 03:18:27.223723 7387 scope.go:117] "RemoveContainer" containerID="886c2fa8f6da76d9fb13730fe0f87e8425e3e99bb43a5c2162f25e3c2baef044" Mar 08 03:18:27.261725 master-0 kubenswrapper[7387]: I0308 03:18:27.259753 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l2dj4" podStartSLOduration=375.609565331 podStartE2EDuration="6m39.259681666s" podCreationTimestamp="2026-03-08 03:11:48 +0000 UTC" firstStartedPulling="2026-03-08 03:11:50.415818083 +0000 UTC m=+46.810293764" lastFinishedPulling="2026-03-08 03:12:14.065934368 +0000 UTC m=+70.460410099" observedRunningTime="2026-03-08 03:18:27.231147181 +0000 UTC m=+443.625622912" watchObservedRunningTime="2026-03-08 03:18:27.259681666 +0000 UTC m=+443.654157387" Mar 08 03:18:27.330551 master-0 kubenswrapper[7387]: I0308 03:18:27.330497 7387 scope.go:117] "RemoveContainer" containerID="5344ff16fad08b9c5182bd478544ec5dffdc1907ce0e2bdd88e67cb83807f31b" Mar 08 03:18:27.351041 master-0 kubenswrapper[7387]: I0308 03:18:27.350975 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ljh97" podStartSLOduration=374.639587534 podStartE2EDuration="6m36.35095698s" podCreationTimestamp="2026-03-08 03:11:51 +0000 UTC" firstStartedPulling="2026-03-08 03:11:52.443469343 +0000 UTC m=+48.837945024" lastFinishedPulling="2026-03-08 03:12:14.154838759 +0000 UTC m=+70.549314470" observedRunningTime="2026-03-08 03:18:27.348556997 +0000 UTC m=+443.743032718" watchObservedRunningTime="2026-03-08 03:18:27.35095698 +0000 UTC m=+443.745432681" Mar 08 03:18:27.373300 master-0 kubenswrapper[7387]: I0308 03:18:27.373255 7387 scope.go:117] "RemoveContainer" 
containerID="d3c34dc0fc7fb67fe9086e5e2aa23fc62ce1243d54049c653663a3e880adacde" Mar 08 03:18:27.423770 master-0 kubenswrapper[7387]: I0308 03:18:27.423712 7387 scope.go:117] "RemoveContainer" containerID="e5123ed41e1d59409ecb7b6a093fbec053e4ad2aa3edc24c60e3ea460620ff6a" Mar 08 03:18:27.465732 master-0 kubenswrapper[7387]: I0308 03:18:27.465683 7387 scope.go:117] "RemoveContainer" containerID="8ab87543a0dca707df87062a9fccbc3d1ab6ac26bb171ba825afd502c52f108c" Mar 08 03:18:27.505316 master-0 kubenswrapper[7387]: I0308 03:18:27.505146 7387 scope.go:117] "RemoveContainer" containerID="5cdb4a8c29f6fa12b785f47ff7ebb0b695874d4bc9ab1dc427c074f4ca967686" Mar 08 03:18:27.521809 master-0 kubenswrapper[7387]: I0308 03:18:27.520723 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qwkmn" podStartSLOduration=375.825904339 podStartE2EDuration="6m37.520699329s" podCreationTimestamp="2026-03-08 03:11:50 +0000 UTC" firstStartedPulling="2026-03-08 03:11:52.43881607 +0000 UTC m=+48.833291761" lastFinishedPulling="2026-03-08 03:12:14.13361107 +0000 UTC m=+70.528086751" observedRunningTime="2026-03-08 03:18:27.51958533 +0000 UTC m=+443.914061031" watchObservedRunningTime="2026-03-08 03:18:27.520699329 +0000 UTC m=+443.915175010" Mar 08 03:18:27.533258 master-0 kubenswrapper[7387]: I0308 03:18:27.533206 7387 scope.go:117] "RemoveContainer" containerID="628092a132c01e680651df338ffa2bf9cd4d31a076b8f10e995aa7b3b2bc5d22" Mar 08 03:18:27.537498 master-0 kubenswrapper[7387]: I0308 03:18:27.537243 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-xtwpr_2468d2a3-ec65-4888-a86a-3f66fa311f56/kube-controller-manager-operator/1.log" Mar 08 03:18:27.544740 master-0 kubenswrapper[7387]: I0308 03:18:27.544670 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt" 
event={"ID":"4711e21f-da6d-47ee-8722-64663e05de10","Type":"ContainerStarted","Data":"24027b59dda46d94a7e2a44f624ddff046a8eb2c97a011a50b8c8d2955a5f46d"} Mar 08 03:18:27.552391 master-0 kubenswrapper[7387]: I0308 03:18:27.552357 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-gstfr_2a506cf6-bc39-4089-9caa-4c14c4d15c11/openshift-apiserver-operator/1.log" Mar 08 03:18:27.560659 master-0 kubenswrapper[7387]: I0308 03:18:27.560527 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-7f65c457f5-7k8j7_1d446527-f3fd-4a37-a980-7445031928d1/kube-storage-version-migrator-operator/2.log" Mar 08 03:18:27.561413 master-0 kubenswrapper[7387]: I0308 03:18:27.560951 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-7f65c457f5-7k8j7_1d446527-f3fd-4a37-a980-7445031928d1/kube-storage-version-migrator-operator/1.log" Mar 08 03:18:27.561413 master-0 kubenswrapper[7387]: I0308 03:18:27.561010 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7" event={"ID":"1d446527-f3fd-4a37-a980-7445031928d1","Type":"ContainerStarted","Data":"f7da8d6f43578f41e1847ca0341da34176f025a0cb8ed318bf310486d31635fa"} Mar 08 03:18:27.561646 master-0 kubenswrapper[7387]: I0308 03:18:27.561532 7387 scope.go:117] "RemoveContainer" containerID="d3c34dc0fc7fb67fe9086e5e2aa23fc62ce1243d54049c653663a3e880adacde" Mar 08 03:18:27.562434 master-0 kubenswrapper[7387]: E0308 03:18:27.561817 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3c34dc0fc7fb67fe9086e5e2aa23fc62ce1243d54049c653663a3e880adacde\": container with ID starting with 
d3c34dc0fc7fb67fe9086e5e2aa23fc62ce1243d54049c653663a3e880adacde not found: ID does not exist" containerID="d3c34dc0fc7fb67fe9086e5e2aa23fc62ce1243d54049c653663a3e880adacde" Mar 08 03:18:27.562434 master-0 kubenswrapper[7387]: I0308 03:18:27.561845 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3c34dc0fc7fb67fe9086e5e2aa23fc62ce1243d54049c653663a3e880adacde"} err="failed to get container status \"d3c34dc0fc7fb67fe9086e5e2aa23fc62ce1243d54049c653663a3e880adacde\": rpc error: code = NotFound desc = could not find container \"d3c34dc0fc7fb67fe9086e5e2aa23fc62ce1243d54049c653663a3e880adacde\": container with ID starting with d3c34dc0fc7fb67fe9086e5e2aa23fc62ce1243d54049c653663a3e880adacde not found: ID does not exist" Mar 08 03:18:27.562434 master-0 kubenswrapper[7387]: I0308 03:18:27.561861 7387 scope.go:117] "RemoveContainer" containerID="e5123ed41e1d59409ecb7b6a093fbec053e4ad2aa3edc24c60e3ea460620ff6a" Mar 08 03:18:27.565325 master-0 kubenswrapper[7387]: E0308 03:18:27.565254 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5123ed41e1d59409ecb7b6a093fbec053e4ad2aa3edc24c60e3ea460620ff6a\": container with ID starting with e5123ed41e1d59409ecb7b6a093fbec053e4ad2aa3edc24c60e3ea460620ff6a not found: ID does not exist" containerID="e5123ed41e1d59409ecb7b6a093fbec053e4ad2aa3edc24c60e3ea460620ff6a" Mar 08 03:18:27.565325 master-0 kubenswrapper[7387]: I0308 03:18:27.565282 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5123ed41e1d59409ecb7b6a093fbec053e4ad2aa3edc24c60e3ea460620ff6a"} err="failed to get container status \"e5123ed41e1d59409ecb7b6a093fbec053e4ad2aa3edc24c60e3ea460620ff6a\": rpc error: code = NotFound desc = could not find container \"e5123ed41e1d59409ecb7b6a093fbec053e4ad2aa3edc24c60e3ea460620ff6a\": container with ID starting with 
e5123ed41e1d59409ecb7b6a093fbec053e4ad2aa3edc24c60e3ea460620ff6a not found: ID does not exist" Mar 08 03:18:27.565325 master-0 kubenswrapper[7387]: I0308 03:18:27.565302 7387 scope.go:117] "RemoveContainer" containerID="5cdb4a8c29f6fa12b785f47ff7ebb0b695874d4bc9ab1dc427c074f4ca967686" Mar 08 03:18:27.569989 master-0 kubenswrapper[7387]: E0308 03:18:27.569500 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cdb4a8c29f6fa12b785f47ff7ebb0b695874d4bc9ab1dc427c074f4ca967686\": container with ID starting with 5cdb4a8c29f6fa12b785f47ff7ebb0b695874d4bc9ab1dc427c074f4ca967686 not found: ID does not exist" containerID="5cdb4a8c29f6fa12b785f47ff7ebb0b695874d4bc9ab1dc427c074f4ca967686" Mar 08 03:18:27.569989 master-0 kubenswrapper[7387]: I0308 03:18:27.569555 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cdb4a8c29f6fa12b785f47ff7ebb0b695874d4bc9ab1dc427c074f4ca967686"} err="failed to get container status \"5cdb4a8c29f6fa12b785f47ff7ebb0b695874d4bc9ab1dc427c074f4ca967686\": rpc error: code = NotFound desc = could not find container \"5cdb4a8c29f6fa12b785f47ff7ebb0b695874d4bc9ab1dc427c074f4ca967686\": container with ID starting with 5cdb4a8c29f6fa12b785f47ff7ebb0b695874d4bc9ab1dc427c074f4ca967686 not found: ID does not exist" Mar 08 03:18:27.569989 master-0 kubenswrapper[7387]: I0308 03:18:27.569610 7387 scope.go:117] "RemoveContainer" containerID="628092a132c01e680651df338ffa2bf9cd4d31a076b8f10e995aa7b3b2bc5d22" Mar 08 03:18:27.571172 master-0 kubenswrapper[7387]: E0308 03:18:27.570604 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"628092a132c01e680651df338ffa2bf9cd4d31a076b8f10e995aa7b3b2bc5d22\": container with ID starting with 628092a132c01e680651df338ffa2bf9cd4d31a076b8f10e995aa7b3b2bc5d22 not found: ID does not exist" 
containerID="628092a132c01e680651df338ffa2bf9cd4d31a076b8f10e995aa7b3b2bc5d22" Mar 08 03:18:27.571172 master-0 kubenswrapper[7387]: I0308 03:18:27.570649 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"628092a132c01e680651df338ffa2bf9cd4d31a076b8f10e995aa7b3b2bc5d22"} err="failed to get container status \"628092a132c01e680651df338ffa2bf9cd4d31a076b8f10e995aa7b3b2bc5d22\": rpc error: code = NotFound desc = could not find container \"628092a132c01e680651df338ffa2bf9cd4d31a076b8f10e995aa7b3b2bc5d22\": container with ID starting with 628092a132c01e680651df338ffa2bf9cd4d31a076b8f10e995aa7b3b2bc5d22 not found: ID does not exist" Mar 08 03:18:27.571172 master-0 kubenswrapper[7387]: I0308 03:18:27.570674 7387 scope.go:117] "RemoveContainer" containerID="0ece4a43051b1635cbb843e7e2b46319cb5de6a10e2de8626c1fb83227bc0d72" Mar 08 03:18:27.571172 master-0 kubenswrapper[7387]: I0308 03:18:27.571101 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" event={"ID":"e2495994-736c-4916-b210-ff5633f3387d","Type":"ContainerStarted","Data":"d6083de08fa8a9f86a3a4636376820118e5d2c03d8b520f0635e9d2361ef8efe"} Mar 08 03:18:27.572610 master-0 kubenswrapper[7387]: I0308 03:18:27.571940 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" Mar 08 03:18:27.572610 master-0 kubenswrapper[7387]: I0308 03:18:27.571998 7387 patch_prober.go:28] interesting pod/route-controller-manager-8c4996cd4-qsvqj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.49:8443/healthz\": dial tcp 10.128.0.49:8443: connect: connection refused" start-of-body= Mar 08 03:18:27.572610 master-0 kubenswrapper[7387]: I0308 03:18:27.572026 7387 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.49:8443/healthz\": dial tcp 10.128.0.49:8443: connect: connection refused" Mar 08 03:18:27.574337 master-0 kubenswrapper[7387]: I0308 03:18:27.574306 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86" event={"ID":"d2a53f3b-7e22-47eb-9f28-da3441b3662f","Type":"ContainerStarted","Data":"24db9ff0b4f3a843d44fe7f7cb6ef1e2e1973a49778543d03a6faa68fce36a95"} Mar 08 03:18:27.589925 master-0 kubenswrapper[7387]: I0308 03:18:27.589860 7387 scope.go:117] "RemoveContainer" containerID="886c2fa8f6da76d9fb13730fe0f87e8425e3e99bb43a5c2162f25e3c2baef044" Mar 08 03:18:27.591847 master-0 kubenswrapper[7387]: E0308 03:18:27.591535 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"886c2fa8f6da76d9fb13730fe0f87e8425e3e99bb43a5c2162f25e3c2baef044\": container with ID starting with 886c2fa8f6da76d9fb13730fe0f87e8425e3e99bb43a5c2162f25e3c2baef044 not found: ID does not exist" containerID="886c2fa8f6da76d9fb13730fe0f87e8425e3e99bb43a5c2162f25e3c2baef044" Mar 08 03:18:27.591847 master-0 kubenswrapper[7387]: I0308 03:18:27.591591 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"886c2fa8f6da76d9fb13730fe0f87e8425e3e99bb43a5c2162f25e3c2baef044"} err="failed to get container status \"886c2fa8f6da76d9fb13730fe0f87e8425e3e99bb43a5c2162f25e3c2baef044\": rpc error: code = NotFound desc = could not find container \"886c2fa8f6da76d9fb13730fe0f87e8425e3e99bb43a5c2162f25e3c2baef044\": container with ID starting with 886c2fa8f6da76d9fb13730fe0f87e8425e3e99bb43a5c2162f25e3c2baef044 not found: ID does not exist" Mar 08 03:18:27.591847 master-0 kubenswrapper[7387]: I0308 03:18:27.591628 
7387 scope.go:117] "RemoveContainer" containerID="8b17e447bf7b73e8b40c761d522dbc3e0c3fec36ea0bf2258e54193e9736e408" Mar 08 03:18:27.598560 master-0 kubenswrapper[7387]: I0308 03:18:27.598499 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/2.log" Mar 08 03:18:27.598679 master-0 kubenswrapper[7387]: I0308 03:18:27.598588 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" event={"ID":"9fb588a9-6240-4513-8e4b-248eb43d3f06","Type":"ContainerStarted","Data":"5d5ab4a36feb6e5428f4fe82fd02d1bf53851b6363e11c4e53ba7fc20e220f93"} Mar 08 03:18:27.606427 master-0 kubenswrapper[7387]: I0308 03:18:27.604509 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-8qznw_f8711b9f-3d18-4b8d-a263-2c9af9dc68a6/package-server-manager/0.log" Mar 08 03:18:27.606427 master-0 kubenswrapper[7387]: I0308 03:18:27.606038 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" Mar 08 03:18:27.610942 master-0 kubenswrapper[7387]: I0308 03:18:27.609369 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-h7lpf_0722d9c3-77b8-4770-9171-d4aeba4b0cc7/openshift-controller-manager-operator/1.log" Mar 08 03:18:27.621092 master-0 kubenswrapper[7387]: I0308 03:18:27.620596 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-rz5c8_89e15db4-c541-4d53-878d-706fa022f970/kube-scheduler-operator-container/1.log" Mar 08 03:18:27.627883 master-0 kubenswrapper[7387]: I0308 03:18:27.627845 7387 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" Mar 08 03:18:27.628606 master-0 kubenswrapper[7387]: I0308 03:18:27.628573 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 08 03:18:27.634053 master-0 kubenswrapper[7387]: I0308 03:18:27.634010 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 08 03:18:27.641117 master-0 kubenswrapper[7387]: I0308 03:18:27.640378 7387 scope.go:117] "RemoveContainer" containerID="0757df7640b506a1487a772cc33679d0f06cf17369689e5bb19ed682a933347c" Mar 08 03:18:27.671129 master-0 kubenswrapper[7387]: I0308 03:18:27.671087 7387 scope.go:117] "RemoveContainer" containerID="0529887e94e92870d8170b7a6f9ac44c1a9e4434031edde3aa2a6844aae2f3c1" Mar 08 03:18:27.686979 master-0 kubenswrapper[7387]: I0308 03:18:27.686860 7387 scope.go:117] "RemoveContainer" containerID="5ea4d742313470919626ed619f63545042ece5a1573517854bb097c5ce7c3645" Mar 08 03:18:27.689292 master-0 kubenswrapper[7387]: I0308 03:18:27.689228 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bv2v9" podStartSLOduration=376.02918257 podStartE2EDuration="6m39.689207666s" podCreationTimestamp="2026-03-08 03:11:48 +0000 UTC" firstStartedPulling="2026-03-08 03:11:50.406802296 +0000 UTC m=+46.801277977" lastFinishedPulling="2026-03-08 03:12:14.066827352 +0000 UTC m=+70.461303073" observedRunningTime="2026-03-08 03:18:27.687356927 +0000 UTC m=+444.081832608" watchObservedRunningTime="2026-03-08 03:18:27.689207666 +0000 UTC m=+444.083683347" Mar 08 03:18:27.708831 master-0 kubenswrapper[7387]: I0308 03:18:27.708802 7387 scope.go:117] "RemoveContainer" containerID="97e7e8e1d4c76162fdd36f707ca3e2faaa5f8b65907e58ff8edb116f08fe408b" Mar 08 03:18:27.743056 master-0 kubenswrapper[7387]: I0308 03:18:27.743018 7387 scope.go:117] "RemoveContainer" 
containerID="6dd339bdc8fcd78151718754ca62bc4f1f1c78bc15d5ff2223af1f5068c80ca0" Mar 08 03:18:27.744967 master-0 kubenswrapper[7387]: E0308 03:18:27.744895 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dd339bdc8fcd78151718754ca62bc4f1f1c78bc15d5ff2223af1f5068c80ca0\": container with ID starting with 6dd339bdc8fcd78151718754ca62bc4f1f1c78bc15d5ff2223af1f5068c80ca0 not found: ID does not exist" containerID="6dd339bdc8fcd78151718754ca62bc4f1f1c78bc15d5ff2223af1f5068c80ca0" Mar 08 03:18:27.745050 master-0 kubenswrapper[7387]: I0308 03:18:27.745012 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dd339bdc8fcd78151718754ca62bc4f1f1c78bc15d5ff2223af1f5068c80ca0"} err="failed to get container status \"6dd339bdc8fcd78151718754ca62bc4f1f1c78bc15d5ff2223af1f5068c80ca0\": rpc error: code = NotFound desc = could not find container \"6dd339bdc8fcd78151718754ca62bc4f1f1c78bc15d5ff2223af1f5068c80ca0\": container with ID starting with 6dd339bdc8fcd78151718754ca62bc4f1f1c78bc15d5ff2223af1f5068c80ca0 not found: ID does not exist" Mar 08 03:18:27.745095 master-0 kubenswrapper[7387]: I0308 03:18:27.745052 7387 scope.go:117] "RemoveContainer" containerID="5344ff16fad08b9c5182bd478544ec5dffdc1907ce0e2bdd88e67cb83807f31b" Mar 08 03:18:27.746312 master-0 kubenswrapper[7387]: E0308 03:18:27.746279 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5344ff16fad08b9c5182bd478544ec5dffdc1907ce0e2bdd88e67cb83807f31b\": container with ID starting with 5344ff16fad08b9c5182bd478544ec5dffdc1907ce0e2bdd88e67cb83807f31b not found: ID does not exist" containerID="5344ff16fad08b9c5182bd478544ec5dffdc1907ce0e2bdd88e67cb83807f31b" Mar 08 03:18:27.746377 master-0 kubenswrapper[7387]: I0308 03:18:27.746312 7387 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5344ff16fad08b9c5182bd478544ec5dffdc1907ce0e2bdd88e67cb83807f31b"} err="failed to get container status \"5344ff16fad08b9c5182bd478544ec5dffdc1907ce0e2bdd88e67cb83807f31b\": rpc error: code = NotFound desc = could not find container \"5344ff16fad08b9c5182bd478544ec5dffdc1907ce0e2bdd88e67cb83807f31b\": container with ID starting with 5344ff16fad08b9c5182bd478544ec5dffdc1907ce0e2bdd88e67cb83807f31b not found: ID does not exist" Mar 08 03:18:27.746377 master-0 kubenswrapper[7387]: I0308 03:18:27.746333 7387 scope.go:117] "RemoveContainer" containerID="8b17e447bf7b73e8b40c761d522dbc3e0c3fec36ea0bf2258e54193e9736e408" Mar 08 03:18:27.747036 master-0 kubenswrapper[7387]: E0308 03:18:27.747000 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b17e447bf7b73e8b40c761d522dbc3e0c3fec36ea0bf2258e54193e9736e408\": container with ID starting with 8b17e447bf7b73e8b40c761d522dbc3e0c3fec36ea0bf2258e54193e9736e408 not found: ID does not exist" containerID="8b17e447bf7b73e8b40c761d522dbc3e0c3fec36ea0bf2258e54193e9736e408" Mar 08 03:18:27.747114 master-0 kubenswrapper[7387]: I0308 03:18:27.747043 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b17e447bf7b73e8b40c761d522dbc3e0c3fec36ea0bf2258e54193e9736e408"} err="failed to get container status \"8b17e447bf7b73e8b40c761d522dbc3e0c3fec36ea0bf2258e54193e9736e408\": rpc error: code = NotFound desc = could not find container \"8b17e447bf7b73e8b40c761d522dbc3e0c3fec36ea0bf2258e54193e9736e408\": container with ID starting with 8b17e447bf7b73e8b40c761d522dbc3e0c3fec36ea0bf2258e54193e9736e408 not found: ID does not exist" Mar 08 03:18:27.747155 master-0 kubenswrapper[7387]: I0308 03:18:27.747117 7387 scope.go:117] "RemoveContainer" containerID="0757df7640b506a1487a772cc33679d0f06cf17369689e5bb19ed682a933347c" Mar 08 03:18:27.749533 master-0 kubenswrapper[7387]: E0308 
03:18:27.749496 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0757df7640b506a1487a772cc33679d0f06cf17369689e5bb19ed682a933347c\": container with ID starting with 0757df7640b506a1487a772cc33679d0f06cf17369689e5bb19ed682a933347c not found: ID does not exist" containerID="0757df7640b506a1487a772cc33679d0f06cf17369689e5bb19ed682a933347c" Mar 08 03:18:27.749617 master-0 kubenswrapper[7387]: I0308 03:18:27.749537 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0757df7640b506a1487a772cc33679d0f06cf17369689e5bb19ed682a933347c"} err="failed to get container status \"0757df7640b506a1487a772cc33679d0f06cf17369689e5bb19ed682a933347c\": rpc error: code = NotFound desc = could not find container \"0757df7640b506a1487a772cc33679d0f06cf17369689e5bb19ed682a933347c\": container with ID starting with 0757df7640b506a1487a772cc33679d0f06cf17369689e5bb19ed682a933347c not found: ID does not exist" Mar 08 03:18:27.749617 master-0 kubenswrapper[7387]: I0308 03:18:27.749584 7387 scope.go:117] "RemoveContainer" containerID="d3c34dc0fc7fb67fe9086e5e2aa23fc62ce1243d54049c653663a3e880adacde" Mar 08 03:18:27.759725 master-0 kubenswrapper[7387]: I0308 03:18:27.759657 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3c34dc0fc7fb67fe9086e5e2aa23fc62ce1243d54049c653663a3e880adacde"} err="failed to get container status \"d3c34dc0fc7fb67fe9086e5e2aa23fc62ce1243d54049c653663a3e880adacde\": rpc error: code = NotFound desc = could not find container \"d3c34dc0fc7fb67fe9086e5e2aa23fc62ce1243d54049c653663a3e880adacde\": container with ID starting with d3c34dc0fc7fb67fe9086e5e2aa23fc62ce1243d54049c653663a3e880adacde not found: ID does not exist" Mar 08 03:18:27.759776 master-0 kubenswrapper[7387]: I0308 03:18:27.759733 7387 scope.go:117] "RemoveContainer" 
containerID="e5123ed41e1d59409ecb7b6a093fbec053e4ad2aa3edc24c60e3ea460620ff6a" Mar 08 03:18:27.760466 master-0 kubenswrapper[7387]: I0308 03:18:27.760398 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5123ed41e1d59409ecb7b6a093fbec053e4ad2aa3edc24c60e3ea460620ff6a"} err="failed to get container status \"e5123ed41e1d59409ecb7b6a093fbec053e4ad2aa3edc24c60e3ea460620ff6a\": rpc error: code = NotFound desc = could not find container \"e5123ed41e1d59409ecb7b6a093fbec053e4ad2aa3edc24c60e3ea460620ff6a\": container with ID starting with e5123ed41e1d59409ecb7b6a093fbec053e4ad2aa3edc24c60e3ea460620ff6a not found: ID does not exist" Mar 08 03:18:27.760514 master-0 kubenswrapper[7387]: I0308 03:18:27.760465 7387 scope.go:117] "RemoveContainer" containerID="5cdb4a8c29f6fa12b785f47ff7ebb0b695874d4bc9ab1dc427c074f4ca967686" Mar 08 03:18:27.760920 master-0 kubenswrapper[7387]: I0308 03:18:27.760866 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cdb4a8c29f6fa12b785f47ff7ebb0b695874d4bc9ab1dc427c074f4ca967686"} err="failed to get container status \"5cdb4a8c29f6fa12b785f47ff7ebb0b695874d4bc9ab1dc427c074f4ca967686\": rpc error: code = NotFound desc = could not find container \"5cdb4a8c29f6fa12b785f47ff7ebb0b695874d4bc9ab1dc427c074f4ca967686\": container with ID starting with 5cdb4a8c29f6fa12b785f47ff7ebb0b695874d4bc9ab1dc427c074f4ca967686 not found: ID does not exist" Mar 08 03:18:27.760961 master-0 kubenswrapper[7387]: I0308 03:18:27.760918 7387 scope.go:117] "RemoveContainer" containerID="628092a132c01e680651df338ffa2bf9cd4d31a076b8f10e995aa7b3b2bc5d22" Mar 08 03:18:27.761215 master-0 kubenswrapper[7387]: I0308 03:18:27.761180 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"628092a132c01e680651df338ffa2bf9cd4d31a076b8f10e995aa7b3b2bc5d22"} err="failed to get container status 
\"628092a132c01e680651df338ffa2bf9cd4d31a076b8f10e995aa7b3b2bc5d22\": rpc error: code = NotFound desc = could not find container \"628092a132c01e680651df338ffa2bf9cd4d31a076b8f10e995aa7b3b2bc5d22\": container with ID starting with 628092a132c01e680651df338ffa2bf9cd4d31a076b8f10e995aa7b3b2bc5d22 not found: ID does not exist" Mar 08 03:18:27.761215 master-0 kubenswrapper[7387]: I0308 03:18:27.761207 7387 scope.go:117] "RemoveContainer" containerID="0529887e94e92870d8170b7a6f9ac44c1a9e4434031edde3aa2a6844aae2f3c1" Mar 08 03:18:27.761487 master-0 kubenswrapper[7387]: E0308 03:18:27.761452 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0529887e94e92870d8170b7a6f9ac44c1a9e4434031edde3aa2a6844aae2f3c1\": container with ID starting with 0529887e94e92870d8170b7a6f9ac44c1a9e4434031edde3aa2a6844aae2f3c1 not found: ID does not exist" containerID="0529887e94e92870d8170b7a6f9ac44c1a9e4434031edde3aa2a6844aae2f3c1" Mar 08 03:18:27.761522 master-0 kubenswrapper[7387]: I0308 03:18:27.761482 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0529887e94e92870d8170b7a6f9ac44c1a9e4434031edde3aa2a6844aae2f3c1"} err="failed to get container status \"0529887e94e92870d8170b7a6f9ac44c1a9e4434031edde3aa2a6844aae2f3c1\": rpc error: code = NotFound desc = could not find container \"0529887e94e92870d8170b7a6f9ac44c1a9e4434031edde3aa2a6844aae2f3c1\": container with ID starting with 0529887e94e92870d8170b7a6f9ac44c1a9e4434031edde3aa2a6844aae2f3c1 not found: ID does not exist" Mar 08 03:18:27.780218 master-0 kubenswrapper[7387]: I0308 03:18:27.780150 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b8c5365-e7a0-4f69-913f-2e12b142e4a5" path="/var/lib/kubelet/pods/8b8c5365-e7a0-4f69-913f-2e12b142e4a5/volumes" Mar 08 03:18:27.780763 master-0 kubenswrapper[7387]: I0308 03:18:27.780729 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="d9732f3d-49d0-4400-ab54-ce029c49ec37" path="/var/lib/kubelet/pods/d9732f3d-49d0-4400-ab54-ce029c49ec37/volumes" Mar 08 03:18:28.626354 master-0 kubenswrapper[7387]: I0308 03:18:28.626263 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 03:18:28.627178 master-0 kubenswrapper[7387]: I0308 03:18:28.626463 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 03:18:28.634110 master-0 kubenswrapper[7387]: I0308 03:18:28.634054 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-wxrfp_89fc77c9-b444-4828-8a35-c63ea9335245/network-operator/1.log" Mar 08 03:18:28.638030 master-0 kubenswrapper[7387]: I0308 03:18:28.637975 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-69b6fc6b88-vjmf6_1fa64f1b-9f10-488b-8f94-1600774062c4/service-ca-operator/1.log" Mar 08 03:18:28.641889 master-0 kubenswrapper[7387]: I0308 03:18:28.641838 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-zcr8w_5a058138-8039-4841-821b-7ee5bb8648e4/kube-apiserver-operator/1.log" Mar 08 03:18:28.644724 master-0 kubenswrapper[7387]: I0308 03:18:28.644669 7387 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-8qznw_f8711b9f-3d18-4b8d-a263-2c9af9dc68a6/package-server-manager/0.log" Mar 08 03:18:28.645418 master-0 kubenswrapper[7387]: I0308 03:18:28.645351 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" event={"ID":"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6","Type":"ContainerStarted","Data":"c86422caffa4210f8d2d79226aa71c0eb21bf5b4345acfa110f682a6a9383e9a"} Mar 08 03:18:28.649429 master-0 kubenswrapper[7387]: I0308 03:18:28.649380 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"e305d74af325e5eeb0f6ddb53f983c1d6252a98bbdc0c950b558e6fbfd49c54c"} Mar 08 03:18:28.649505 master-0 kubenswrapper[7387]: I0308 03:18:28.649438 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"be3f100eb7ee4d7b6f435b1a7bf70e291908c984ecfe21da6d4b4fe3a36ab5f2"} Mar 08 03:18:28.653985 master-0 kubenswrapper[7387]: I0308 03:18:28.653888 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-dn4ll_c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/etcd-operator/2.log" Mar 08 03:18:28.654470 master-0 kubenswrapper[7387]: I0308 03:18:28.654412 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" event={"ID":"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b","Type":"ContainerStarted","Data":"c570ba340cf097b9a186b03c44668b2eb412d97ceaff7d6fc9d02e3d84a0cdb3"} Mar 08 03:18:28.657486 master-0 kubenswrapper[7387]: I0308 03:18:28.657408 7387 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-7f65c457f5-7k8j7_1d446527-f3fd-4a37-a980-7445031928d1/kube-storage-version-migrator-operator/2.log" Mar 08 03:18:28.660184 master-0 kubenswrapper[7387]: I0308 03:18:28.660125 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/2.log" Mar 08 03:18:28.766186 master-0 kubenswrapper[7387]: E0308 03:18:28.765849 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:18:18Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:18:18Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:18:18Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:18:18Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:ae042a5d32eb2f18d537f2068849e665b55df7d8360daedaaeea98bd2a79e769\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:d077bbabe6cb885ed229119008480493e8364e4bfddaa00b099f68c52b016e6b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1733328350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:063b8972231e65eb43f6545ba37804f68138dc54d97b91a652a1c5bc7dc76aa5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256
:cf682d23b2857e455609879a0867d171a221c18e2cec995dd79570b77c5a4705\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1272201949},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e0c034ae18daa01af8d073f8cc24ae4af87883c664304910eab1167fdfd60c0b\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:ef0c6b9e405f7a452211e063ce07ded04ccbe38b53860bfd71b5a7cd5072830a\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1229556414},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:79984dfbdf9aeae3985c7fd7515e12328775c0e7fc4782929d0998f4dd2a87c6\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:7be89499615ec913d0fe40ca89682080a3f1181a066dbc501c877cc7ccbcc9ae\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1220167376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],
\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914\\\"],\\\"sizeBytes\\\":45812642
4},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9\\\"],\\\"sizeBytes\\\":456575686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626\\\"],\\\"sizeBytes\\\":448828105},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\"],\\\"sizeBytes\\\":443271011}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": context deadline exceeded" Mar 08 03:18:28.872500 master-0 kubenswrapper[7387]: I0308 03:18:28.872393 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 08 03:18:29.662029 master-0 kubenswrapper[7387]: I0308 03:18:29.661875 7387 patch_prober.go:28] interesting pod/route-controller-manager-8c4996cd4-qsvqj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 03:18:29.662029 master-0 kubenswrapper[7387]: I0308 03:18:29.661995 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 03:18:29.663457 master-0 kubenswrapper[7387]: I0308 
03:18:29.663262 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 03:18:29.663457 master-0 kubenswrapper[7387]: I0308 03:18:29.663335 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 03:18:30.009100 master-0 kubenswrapper[7387]: I0308 03:18:30.009001 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 03:18:30.009380 master-0 kubenswrapper[7387]: I0308 03:18:30.009095 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 03:18:30.664228 master-0 kubenswrapper[7387]: I0308 03:18:30.664095 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Readiness 
probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 03:18:30.664228 master-0 kubenswrapper[7387]: I0308 03:18:30.664180 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 03:18:30.673614 master-0 kubenswrapper[7387]: I0308 03:18:30.673566 7387 patch_prober.go:28] interesting pod/route-controller-manager-8c4996cd4-qsvqj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 03:18:30.673731 master-0 kubenswrapper[7387]: I0308 03:18:30.673612 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 03:18:31.449236 master-0 kubenswrapper[7387]: I0308 03:18:31.449138 7387 patch_prober.go:28] interesting pod/etcd-operator-5884b9cd56-dn4ll container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body= Mar 08 03:18:31.449523 master-0 kubenswrapper[7387]: I0308 
03:18:31.449231 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" podUID="c6e4afd0-fbcd-49c7-9132-b54c9c28b74b" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" Mar 08 03:18:31.449523 master-0 kubenswrapper[7387]: I0308 03:18:31.449325 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:18:31.450200 master-0 kubenswrapper[7387]: I0308 03:18:31.450139 7387 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="etcd-operator" containerStatusID={"Type":"cri-o","ID":"c570ba340cf097b9a186b03c44668b2eb412d97ceaff7d6fc9d02e3d84a0cdb3"} pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" containerMessage="Container etcd-operator failed liveness probe, will be restarted" Mar 08 03:18:31.450303 master-0 kubenswrapper[7387]: I0308 03:18:31.450212 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" podUID="c6e4afd0-fbcd-49c7-9132-b54c9c28b74b" containerName="etcd-operator" containerID="cri-o://c570ba340cf097b9a186b03c44668b2eb412d97ceaff7d6fc9d02e3d84a0cdb3" gracePeriod=30 Mar 08 03:18:31.674441 master-0 kubenswrapper[7387]: I0308 03:18:31.674338 7387 patch_prober.go:28] interesting pod/route-controller-manager-8c4996cd4-qsvqj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 03:18:31.674441 master-0 kubenswrapper[7387]: I0308 03:18:31.674429 7387 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:18:32.695255 master-0 kubenswrapper[7387]: I0308 03:18:32.695078 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-dn4ll_c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/etcd-operator/3.log"
Mar 08 03:18:32.697073 master-0 kubenswrapper[7387]: I0308 03:18:32.697019 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-dn4ll_c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/etcd-operator/2.log"
Mar 08 03:18:32.697238 master-0 kubenswrapper[7387]: I0308 03:18:32.697111 7387 generic.go:334] "Generic (PLEG): container finished" podID="c6e4afd0-fbcd-49c7-9132-b54c9c28b74b" containerID="c570ba340cf097b9a186b03c44668b2eb412d97ceaff7d6fc9d02e3d84a0cdb3" exitCode=255
Mar 08 03:18:32.697238 master-0 kubenswrapper[7387]: I0308 03:18:32.697178 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" event={"ID":"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b","Type":"ContainerDied","Data":"c570ba340cf097b9a186b03c44668b2eb412d97ceaff7d6fc9d02e3d84a0cdb3"}
Mar 08 03:18:32.697429 master-0 kubenswrapper[7387]: I0308 03:18:32.697268 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" event={"ID":"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b","Type":"ContainerStarted","Data":"ba71a05bad6a20ee6c802a92e9435b17cd722af277a98de423aa90bee7e17757"}
Mar 08 03:18:32.697429 master-0 kubenswrapper[7387]: I0308 03:18:32.697319 7387 scope.go:117] "RemoveContainer" containerID="83e1d070000e62345139ef045f8a5e382a6175a1f7868ac9989b2dfe38a06c65"
Mar 08 03:18:33.008871 master-0 kubenswrapper[7387]: I0308 03:18:33.008816 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 03:18:33.009323 master-0 kubenswrapper[7387]: I0308 03:18:33.009263 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:18:33.320049 master-0 kubenswrapper[7387]: I0308 03:18:33.319843 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:18:33.380564 master-0 kubenswrapper[7387]: I0308 03:18:33.380440 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 03:18:33.380946 master-0 kubenswrapper[7387]: I0308 03:18:33.380555 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:18:33.704688 master-0 kubenswrapper[7387]: I0308 03:18:33.704570 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-dn4ll_c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/etcd-operator/3.log"
Mar 08 03:18:33.739217 master-0 kubenswrapper[7387]: I0308 03:18:33.739135 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:18:33.873226 master-0 kubenswrapper[7387]: I0308 03:18:33.873154 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Mar 08 03:18:33.903403 master-0 kubenswrapper[7387]: I0308 03:18:33.903352 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Mar 08 03:18:34.561891 master-0 kubenswrapper[7387]: I0308 03:18:34.561751 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"]
Mar 08 03:18:34.640133 master-0 kubenswrapper[7387]: I0308 03:18:34.640012 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:18:34.732169 master-0 kubenswrapper[7387]: E0308 03:18:34.732050 7387 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0"
Mar 08 03:18:34.733149 master-0 kubenswrapper[7387]: I0308 03:18:34.732467 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Mar 08 03:18:34.768583 master-0 kubenswrapper[7387]: E0308 03:18:34.768476 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 08 03:18:34.782860 master-0 kubenswrapper[7387]: I0308 03:18:34.782767 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=0.782740782 podStartE2EDuration="782.740782ms" podCreationTimestamp="2026-03-08 03:18:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:18:34.776284341 +0000 UTC m=+451.170760112" watchObservedRunningTime="2026-03-08 03:18:34.782740782 +0000 UTC m=+451.177216493"
Mar 08 03:18:35.007705 master-0 kubenswrapper[7387]: I0308 03:18:35.007508 7387 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-k8xgg container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 03:18:35.008032 master-0 kubenswrapper[7387]: I0308 03:18:35.007739 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" podUID="90ef7c0a-7c6f-45aa-865d-1e247110b265" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:18:35.008032 master-0 kubenswrapper[7387]: I0308 03:18:35.007821 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg"
Mar 08 03:18:35.008032 master-0 kubenswrapper[7387]: I0308 03:18:35.007986 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" start-of-body=
Mar 08 03:18:35.008235 master-0 kubenswrapper[7387]: I0308 03:18:35.008053 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused"
Mar 08 03:18:35.008235 master-0 kubenswrapper[7387]: I0308 03:18:35.008119 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"
Mar 08 03:18:35.008736 master-0 kubenswrapper[7387]: I0308 03:18:35.008679 7387 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"dd4d219059033c12e8a9f8e3d34a3c3099d9ccfe2b147440dd167716ec750fdc"} pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" containerMessage="Container authentication-operator failed liveness probe, will be restarted"
Mar 08 03:18:35.008811 master-0 kubenswrapper[7387]: I0308 03:18:35.008742 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" podUID="90ef7c0a-7c6f-45aa-865d-1e247110b265" containerName="authentication-operator" containerID="cri-o://dd4d219059033c12e8a9f8e3d34a3c3099d9ccfe2b147440dd167716ec750fdc" gracePeriod=30
Mar 08 03:18:35.008811 master-0 kubenswrapper[7387]: I0308 03:18:35.008783 7387 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"af9e47bdeb07b2e79a9535acbbaf30eba9c435fc3d8897762bb3fb61a91678ea"} pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Mar 08 03:18:35.009218 master-0 kubenswrapper[7387]: I0308 03:18:35.008831 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" containerID="cri-o://af9e47bdeb07b2e79a9535acbbaf30eba9c435fc3d8897762bb3fb61a91678ea" gracePeriod=30
Mar 08 03:18:35.010398 master-0 kubenswrapper[7387]: I0308 03:18:35.010337 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" start-of-body=
Mar 08 03:18:35.011473 master-0 kubenswrapper[7387]: I0308 03:18:35.010392 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused"
Mar 08 03:18:35.381030 master-0 kubenswrapper[7387]: I0308 03:18:35.380975 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" start-of-body=
Mar 08 03:18:35.381193 master-0 kubenswrapper[7387]: I0308 03:18:35.381045 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused"
Mar 08 03:18:35.717817 master-0 kubenswrapper[7387]: I0308 03:18:35.717766 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-xtwpr_2468d2a3-ec65-4888-a86a-3f66fa311f56/kube-controller-manager-operator/2.log"
Mar 08 03:18:35.718500 master-0 kubenswrapper[7387]: I0308 03:18:35.718462 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-xtwpr_2468d2a3-ec65-4888-a86a-3f66fa311f56/kube-controller-manager-operator/1.log"
Mar 08 03:18:35.718565 master-0 kubenswrapper[7387]: I0308 03:18:35.718532 7387 generic.go:334] "Generic (PLEG): container finished" podID="2468d2a3-ec65-4888-a86a-3f66fa311f56" containerID="c6227c869f9005e95f446273c65ad19705819a8f1fec09ed23d91f2253df5b7d" exitCode=255
Mar 08 03:18:35.718694 master-0 kubenswrapper[7387]: I0308 03:18:35.718628 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr" event={"ID":"2468d2a3-ec65-4888-a86a-3f66fa311f56","Type":"ContainerDied","Data":"c6227c869f9005e95f446273c65ad19705819a8f1fec09ed23d91f2253df5b7d"}
Mar 08 03:18:35.718771 master-0 kubenswrapper[7387]: I0308 03:18:35.718750 7387 scope.go:117] "RemoveContainer" containerID="e0aecb58f6976eba8696296a6b4880e419ddc1ff4060c7d5c4b00288d7622719"
Mar 08 03:18:35.719310 master-0 kubenswrapper[7387]: I0308 03:18:35.719287 7387 scope.go:117] "RemoveContainer" containerID="c6227c869f9005e95f446273c65ad19705819a8f1fec09ed23d91f2253df5b7d"
Mar 08 03:18:35.719754 master-0 kubenswrapper[7387]: E0308 03:18:35.719573 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-86d7cdfdfb-xtwpr_openshift-kube-controller-manager-operator(2468d2a3-ec65-4888-a86a-3f66fa311f56)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr" podUID="2468d2a3-ec65-4888-a86a-3f66fa311f56"
Mar 08 03:18:35.721204 master-0 kubenswrapper[7387]: I0308 03:18:35.720822 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5685fbc7d-xbrdp_3d69f101-60a8-41fd-bcda-4eb654c626a2/csi-snapshot-controller-operator/1.log"
Mar 08 03:18:35.722172 master-0 kubenswrapper[7387]: I0308 03:18:35.721892 7387 generic.go:334] "Generic (PLEG): container finished" podID="3d69f101-60a8-41fd-bcda-4eb654c626a2" containerID="35a84530b9b77d1b843b53e9598fc2ad2b53c4132c228552e8ac9e5d303df9ce" exitCode=255
Mar 08 03:18:35.722172 master-0 kubenswrapper[7387]: I0308 03:18:35.721954 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-xbrdp" event={"ID":"3d69f101-60a8-41fd-bcda-4eb654c626a2","Type":"ContainerDied","Data":"35a84530b9b77d1b843b53e9598fc2ad2b53c4132c228552e8ac9e5d303df9ce"}
Mar 08 03:18:35.722946 master-0 kubenswrapper[7387]: I0308 03:18:35.722875 7387 scope.go:117] "RemoveContainer" containerID="35a84530b9b77d1b843b53e9598fc2ad2b53c4132c228552e8ac9e5d303df9ce"
Mar 08 03:18:35.723270 master-0 kubenswrapper[7387]: E0308 03:18:35.723226 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-snapshot-controller-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=csi-snapshot-controller-operator pod=csi-snapshot-controller-operator-5685fbc7d-xbrdp_openshift-cluster-storage-operator(3d69f101-60a8-41fd-bcda-4eb654c626a2)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-xbrdp" podUID="3d69f101-60a8-41fd-bcda-4eb654c626a2"
Mar 08 03:18:35.724520 master-0 kubenswrapper[7387]: I0308 03:18:35.724496 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-69b6fc6b88-vjmf6_1fa64f1b-9f10-488b-8f94-1600774062c4/service-ca-operator/2.log"
Mar 08 03:18:35.725031 master-0 kubenswrapper[7387]: I0308 03:18:35.725004 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-69b6fc6b88-vjmf6_1fa64f1b-9f10-488b-8f94-1600774062c4/service-ca-operator/1.log"
Mar 08 03:18:35.725109 master-0 kubenswrapper[7387]: I0308 03:18:35.725057 7387 generic.go:334] "Generic (PLEG): container finished" podID="1fa64f1b-9f10-488b-8f94-1600774062c4" containerID="c5943b694a77c0302101d6a324348e34a33f4a5d12b160d170755271c5624f54" exitCode=255
Mar 08 03:18:35.725154 master-0 kubenswrapper[7387]: I0308 03:18:35.725123 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6" event={"ID":"1fa64f1b-9f10-488b-8f94-1600774062c4","Type":"ContainerDied","Data":"c5943b694a77c0302101d6a324348e34a33f4a5d12b160d170755271c5624f54"}
Mar 08 03:18:35.725638 master-0 kubenswrapper[7387]: I0308 03:18:35.725606 7387 scope.go:117] "RemoveContainer" containerID="c5943b694a77c0302101d6a324348e34a33f4a5d12b160d170755271c5624f54"
Mar 08 03:18:35.725873 master-0 kubenswrapper[7387]: E0308 03:18:35.725813 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=service-ca-operator pod=service-ca-operator-69b6fc6b88-vjmf6_openshift-service-ca-operator(1fa64f1b-9f10-488b-8f94-1600774062c4)\"" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6" podUID="1fa64f1b-9f10-488b-8f94-1600774062c4"
Mar 08 03:18:35.726692 master-0 kubenswrapper[7387]: I0308 03:18:35.726669 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-h7lpf_0722d9c3-77b8-4770-9171-d4aeba4b0cc7/openshift-controller-manager-operator/2.log"
Mar 08 03:18:35.727353 master-0 kubenswrapper[7387]: I0308 03:18:35.727245 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-h7lpf_0722d9c3-77b8-4770-9171-d4aeba4b0cc7/openshift-controller-manager-operator/1.log"
Mar 08 03:18:35.727353 master-0 kubenswrapper[7387]: I0308 03:18:35.727310 7387 generic.go:334] "Generic (PLEG): container finished" podID="0722d9c3-77b8-4770-9171-d4aeba4b0cc7" containerID="5143cbadf379a54eeca92346f6f8d879538d415d4167dd1961c3f4a4dfe1810b" exitCode=255
Mar 08 03:18:35.727457 master-0 kubenswrapper[7387]: I0308 03:18:35.727384 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf" event={"ID":"0722d9c3-77b8-4770-9171-d4aeba4b0cc7","Type":"ContainerDied","Data":"5143cbadf379a54eeca92346f6f8d879538d415d4167dd1961c3f4a4dfe1810b"}
Mar 08 03:18:35.728402 master-0 kubenswrapper[7387]: I0308 03:18:35.727890 7387 scope.go:117] "RemoveContainer" containerID="5143cbadf379a54eeca92346f6f8d879538d415d4167dd1961c3f4a4dfe1810b"
Mar 08 03:18:35.728402 master-0 kubenswrapper[7387]: E0308 03:18:35.728119 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-8565d84698-h7lpf_openshift-controller-manager-operator(0722d9c3-77b8-4770-9171-d4aeba4b0cc7)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf" podUID="0722d9c3-77b8-4770-9171-d4aeba4b0cc7"
Mar 08 03:18:35.729817 master-0 kubenswrapper[7387]: I0308 03:18:35.729728 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-rz5c8_89e15db4-c541-4d53-878d-706fa022f970/kube-scheduler-operator-container/2.log"
Mar 08 03:18:35.735143 master-0 kubenswrapper[7387]: I0308 03:18:35.732282 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-rz5c8_89e15db4-c541-4d53-878d-706fa022f970/kube-scheduler-operator-container/1.log"
Mar 08 03:18:35.735143 master-0 kubenswrapper[7387]: I0308 03:18:35.732315 7387 generic.go:334] "Generic (PLEG): container finished" podID="89e15db4-c541-4d53-878d-706fa022f970" containerID="279e20703ffc1523384ecb744bab2f75686744f29f2bd2fc07a960cf86d7af7c" exitCode=255
Mar 08 03:18:35.735143 master-0 kubenswrapper[7387]: I0308 03:18:35.732358 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" event={"ID":"89e15db4-c541-4d53-878d-706fa022f970","Type":"ContainerDied","Data":"279e20703ffc1523384ecb744bab2f75686744f29f2bd2fc07a960cf86d7af7c"}
Mar 08 03:18:35.735143 master-0 kubenswrapper[7387]: I0308 03:18:35.732620 7387 scope.go:117] "RemoveContainer" containerID="279e20703ffc1523384ecb744bab2f75686744f29f2bd2fc07a960cf86d7af7c"
Mar 08 03:18:35.735143 master-0 kubenswrapper[7387]: E0308 03:18:35.732778 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler-operator-container pod=openshift-kube-scheduler-operator-5c74bfc494-rz5c8_openshift-kube-scheduler-operator(89e15db4-c541-4d53-878d-706fa022f970)\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" podUID="89e15db4-c541-4d53-878d-706fa022f970"
Mar 08 03:18:35.735143 master-0 kubenswrapper[7387]: I0308 03:18:35.734399 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-k8xgg_90ef7c0a-7c6f-45aa-865d-1e247110b265/authentication-operator/2.log"
Mar 08 03:18:35.735143 master-0 kubenswrapper[7387]: I0308 03:18:35.734689 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-k8xgg_90ef7c0a-7c6f-45aa-865d-1e247110b265/authentication-operator/1.log"
Mar 08 03:18:35.735143 master-0 kubenswrapper[7387]: I0308 03:18:35.734711 7387 generic.go:334] "Generic (PLEG): container finished" podID="90ef7c0a-7c6f-45aa-865d-1e247110b265" containerID="dd4d219059033c12e8a9f8e3d34a3c3099d9ccfe2b147440dd167716ec750fdc" exitCode=255
Mar 08 03:18:35.735143 master-0 kubenswrapper[7387]: I0308 03:18:35.734742 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" event={"ID":"90ef7c0a-7c6f-45aa-865d-1e247110b265","Type":"ContainerDied","Data":"dd4d219059033c12e8a9f8e3d34a3c3099d9ccfe2b147440dd167716ec750fdc"}
Mar 08 03:18:35.739073 master-0 kubenswrapper[7387]: I0308 03:18:35.739047 7387 scope.go:117] "RemoveContainer" containerID="60e1587c9cf4a4020a136e8642e8046f93d54430d105f0f097e182d865618fc6"
Mar 08 03:18:35.740221 master-0 kubenswrapper[7387]: I0308 03:18:35.740154 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-d4wnv_bd1bcaff-7dbd-4559-92fc-5453993f643e/openshift-config-operator/2.log"
Mar 08 03:18:35.744155 master-0 kubenswrapper[7387]: I0308 03:18:35.741239 7387 generic.go:334] "Generic (PLEG): container finished" podID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerID="af9e47bdeb07b2e79a9535acbbaf30eba9c435fc3d8897762bb3fb61a91678ea" exitCode=255
Mar 08 03:18:35.744155 master-0 kubenswrapper[7387]: I0308 03:18:35.741292 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" event={"ID":"bd1bcaff-7dbd-4559-92fc-5453993f643e","Type":"ContainerDied","Data":"af9e47bdeb07b2e79a9535acbbaf30eba9c435fc3d8897762bb3fb61a91678ea"}
Mar 08 03:18:35.744155 master-0 kubenswrapper[7387]: I0308 03:18:35.741319 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" event={"ID":"bd1bcaff-7dbd-4559-92fc-5453993f643e","Type":"ContainerStarted","Data":"baddc749e42f097718aa35b36ad713f89e081e60f5274e4f8ef3d143389a47d9"}
Mar 08 03:18:35.744155 master-0 kubenswrapper[7387]: I0308 03:18:35.741685 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"
Mar 08 03:18:35.745662 master-0 kubenswrapper[7387]: I0308 03:18:35.745632 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-wxrfp_89fc77c9-b444-4828-8a35-c63ea9335245/network-operator/2.log"
Mar 08 03:18:35.746157 master-0 kubenswrapper[7387]: I0308 03:18:35.746137 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-wxrfp_89fc77c9-b444-4828-8a35-c63ea9335245/network-operator/1.log"
Mar 08 03:18:35.746217 master-0 kubenswrapper[7387]: I0308 03:18:35.746190 7387 generic.go:334] "Generic (PLEG): container finished" podID="89fc77c9-b444-4828-8a35-c63ea9335245" containerID="2d1f35ff4fbf411febbede650e49c2bb74f638fdc3d27726c7043dd06f0d5e3d" exitCode=255
Mar 08 03:18:35.746290 master-0 kubenswrapper[7387]: I0308 03:18:35.746267 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" event={"ID":"89fc77c9-b444-4828-8a35-c63ea9335245","Type":"ContainerDied","Data":"2d1f35ff4fbf411febbede650e49c2bb74f638fdc3d27726c7043dd06f0d5e3d"}
Mar 08 03:18:35.746874 master-0 kubenswrapper[7387]: I0308 03:18:35.746836 7387 scope.go:117] "RemoveContainer" containerID="2d1f35ff4fbf411febbede650e49c2bb74f638fdc3d27726c7043dd06f0d5e3d"
Mar 08 03:18:35.747111 master-0 kubenswrapper[7387]: E0308 03:18:35.747075 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=network-operator pod=network-operator-7c649bf6d4-wxrfp_openshift-network-operator(89fc77c9-b444-4828-8a35-c63ea9335245)\"" pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" podUID="89fc77c9-b444-4828-8a35-c63ea9335245"
Mar 08 03:18:35.754665 master-0 kubenswrapper[7387]: I0308 03:18:35.754613 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-zcr8w_5a058138-8039-4841-821b-7ee5bb8648e4/kube-apiserver-operator/2.log"
Mar 08 03:18:35.755241 master-0 kubenswrapper[7387]: I0308 03:18:35.755211 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-zcr8w_5a058138-8039-4841-821b-7ee5bb8648e4/kube-apiserver-operator/1.log"
Mar 08 03:18:35.755308 master-0 kubenswrapper[7387]: I0308 03:18:35.755260 7387 generic.go:334] "Generic (PLEG): container finished" podID="5a058138-8039-4841-821b-7ee5bb8648e4" containerID="72b1351e9a3c52004d63474cc4899d00eb9ec35191bb77729c1e4a2c5db91758" exitCode=255
Mar 08 03:18:35.755353 master-0 kubenswrapper[7387]: I0308 03:18:35.755330 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w" event={"ID":"5a058138-8039-4841-821b-7ee5bb8648e4","Type":"ContainerDied","Data":"72b1351e9a3c52004d63474cc4899d00eb9ec35191bb77729c1e4a2c5db91758"}
Mar 08 03:18:35.755961 master-0 kubenswrapper[7387]: I0308 03:18:35.755884 7387 scope.go:117] "RemoveContainer" containerID="72b1351e9a3c52004d63474cc4899d00eb9ec35191bb77729c1e4a2c5db91758"
Mar 08 03:18:35.756155 master-0 kubenswrapper[7387]: E0308 03:18:35.756120 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-68bd585b-zcr8w_openshift-kube-apiserver-operator(5a058138-8039-4841-821b-7ee5bb8648e4)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w" podUID="5a058138-8039-4841-821b-7ee5bb8648e4"
Mar 08 03:18:35.757765 master-0 kubenswrapper[7387]: I0308 03:18:35.757726 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-gstfr_2a506cf6-bc39-4089-9caa-4c14c4d15c11/openshift-apiserver-operator/2.log"
Mar 08 03:18:35.758185 master-0 kubenswrapper[7387]: I0308 03:18:35.758135 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-gstfr_2a506cf6-bc39-4089-9caa-4c14c4d15c11/openshift-apiserver-operator/1.log"
Mar 08 03:18:35.758185 master-0 kubenswrapper[7387]: I0308 03:18:35.758173 7387 generic.go:334] "Generic (PLEG): container finished" podID="2a506cf6-bc39-4089-9caa-4c14c4d15c11" containerID="1d5204ce567ac69cf82074daeb2d6d762b5dea3e2e48fc87e314063a45817203" exitCode=255
Mar 08 03:18:35.758894 master-0 kubenswrapper[7387]: I0308 03:18:35.758855 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" event={"ID":"2a506cf6-bc39-4089-9caa-4c14c4d15c11","Type":"ContainerDied","Data":"1d5204ce567ac69cf82074daeb2d6d762b5dea3e2e48fc87e314063a45817203"}
Mar 08 03:18:35.759162 master-0 kubenswrapper[7387]: I0308 03:18:35.759128 7387 scope.go:117] "RemoveContainer" containerID="1d5204ce567ac69cf82074daeb2d6d762b5dea3e2e48fc87e314063a45817203"
Mar 08 03:18:35.759590 master-0 kubenswrapper[7387]: E0308 03:18:35.759287 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-apiserver-operator pod=openshift-apiserver-operator-799b6db4d7-gstfr_openshift-apiserver-operator(2a506cf6-bc39-4089-9caa-4c14c4d15c11)\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" podUID="2a506cf6-bc39-4089-9caa-4c14c4d15c11"
Mar 08 03:18:35.762625 master-0 kubenswrapper[7387]: I0308 03:18:35.762594 7387 scope.go:117] "RemoveContainer" containerID="7f2168458d76e9e97ed4421cfc89aa215f737c7dfdedd5442acd38bfb2f3b2c4"
Mar 08 03:18:35.785280 master-0 kubenswrapper[7387]: I0308 03:18:35.785213 7387 scope.go:117] "RemoveContainer" containerID="df227d89587fe4b6db1c506d3364812306abac68c1497c581534f430e3bbb731"
Mar 08 03:18:35.809580 master-0 kubenswrapper[7387]: I0308 03:18:35.809544 7387 scope.go:117] "RemoveContainer" containerID="9a657401ad344c6bcb17809838c09bd965a31aa4d11aa9a3d44a7eea2ef4074b"
Mar 08 03:18:35.843444 master-0 kubenswrapper[7387]: I0308 03:18:35.843399 7387 scope.go:117] "RemoveContainer" containerID="722547003e9f3cd7874fd4300454109695088229261fd8d771f182d81e20178d"
Mar 08 03:18:35.875979 master-0 kubenswrapper[7387]: I0308 03:18:35.875928 7387 scope.go:117] "RemoveContainer" containerID="122d82dfb1bfd9c05bd161084f45586e27293d3320c13ab8454659ed4cdae5c0"
Mar 08 03:18:35.902005 master-0 kubenswrapper[7387]: I0308 03:18:35.901931 7387 scope.go:117] "RemoveContainer" containerID="6a0ebfa9daddb42b992bf1e47626f21a3f530f0fb9ecbcd53e5eedae16779630"
Mar 08 03:18:35.995325 master-0 kubenswrapper[7387]: I0308 03:18:35.995271 7387 scope.go:117] "RemoveContainer" containerID="dc97f8f27bad8456e85d3556b0266da3f51b3219e17af7d58b019107138fa1da"
Mar 08 03:18:36.020865 master-0 kubenswrapper[7387]: I0308 03:18:36.020498 7387 scope.go:117] "RemoveContainer" containerID="546471fba50615e89619e415aa22b95c50bac9cc8ea20a1f87e7260bbf84e270"
Mar 08 03:18:36.449450 master-0 kubenswrapper[7387]: I0308 03:18:36.449338 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:18:36.740308 master-0 kubenswrapper[7387]: I0308 03:18:36.740061 7387 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:18:36.768542 master-0 kubenswrapper[7387]: I0308 03:18:36.768451 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-wxrfp_89fc77c9-b444-4828-8a35-c63ea9335245/network-operator/2.log"
Mar 08 03:18:36.771264 master-0 kubenswrapper[7387]: I0308 03:18:36.771190 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-69b6fc6b88-vjmf6_1fa64f1b-9f10-488b-8f94-1600774062c4/service-ca-operator/2.log"
Mar 08 03:18:36.774318 master-0 kubenswrapper[7387]: I0308 03:18:36.774283 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-84bfdbbb7f-jnpl5_7af634f0-65ac-402a-acd6-a8aad11b37ab/service-ca-controller/1.log"
Mar 08 03:18:36.775310 master-0 kubenswrapper[7387]: I0308 03:18:36.775129 7387 generic.go:334] "Generic (PLEG): container finished" podID="7af634f0-65ac-402a-acd6-a8aad11b37ab" containerID="7d5086bc52f5bb65f0e405da68bda521bfa3fc867442a2ce84f387697f4853be" exitCode=255
Mar 08 03:18:36.775310 master-0 kubenswrapper[7387]: I0308 03:18:36.775226 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5" event={"ID":"7af634f0-65ac-402a-acd6-a8aad11b37ab","Type":"ContainerDied","Data":"7d5086bc52f5bb65f0e405da68bda521bfa3fc867442a2ce84f387697f4853be"}
Mar 08 03:18:36.775580 master-0 kubenswrapper[7387]: I0308 03:18:36.775322 7387 scope.go:117] "RemoveContainer" containerID="af65ea05bf6d79301d65510b68a66fb2935b708f2ae46cc68e36995843b0c55c"
Mar 08 03:18:36.776397 master-0 kubenswrapper[7387]: I0308 03:18:36.775979 7387 scope.go:117] "RemoveContainer" containerID="7d5086bc52f5bb65f0e405da68bda521bfa3fc867442a2ce84f387697f4853be"
Mar 08 03:18:36.776397 master-0 kubenswrapper[7387]: E0308 03:18:36.776227 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=service-ca-controller pod=service-ca-84bfdbbb7f-jnpl5_openshift-service-ca(7af634f0-65ac-402a-acd6-a8aad11b37ab)\"" pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5" podUID="7af634f0-65ac-402a-acd6-a8aad11b37ab"
Mar 08 03:18:36.778133 master-0 kubenswrapper[7387]: I0308 03:18:36.777937 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-gstfr_2a506cf6-bc39-4089-9caa-4c14c4d15c11/openshift-apiserver-operator/2.log"
Mar 08 03:18:36.781826 master-0 kubenswrapper[7387]: I0308 03:18:36.781755 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-k8xgg_90ef7c0a-7c6f-45aa-865d-1e247110b265/authentication-operator/2.log"
Mar 08 03:18:36.782032 master-0 kubenswrapper[7387]: I0308 03:18:36.781972 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" event={"ID":"90ef7c0a-7c6f-45aa-865d-1e247110b265","Type":"ContainerStarted","Data":"5c0ec338f20c1d3f7f3579ad9e29304940d141e2ae52320c796bdc9c2392d2b5"}
Mar 08 03:18:36.784599 master-0 kubenswrapper[7387]: I0308 03:18:36.784533 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-xtwpr_2468d2a3-ec65-4888-a86a-3f66fa311f56/kube-controller-manager-operator/2.log"
Mar 08 03:18:36.787541 master-0 kubenswrapper[7387]: I0308 03:18:36.787480 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5685fbc7d-xbrdp_3d69f101-60a8-41fd-bcda-4eb654c626a2/csi-snapshot-controller-operator/1.log"
Mar 08 03:18:36.790588 master-0 kubenswrapper[7387]: I0308 03:18:36.790529 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-zcr8w_5a058138-8039-4841-821b-7ee5bb8648e4/kube-apiserver-operator/2.log"
Mar 08 03:18:36.793265 master-0 kubenswrapper[7387]: I0308 03:18:36.793194 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-h7lpf_0722d9c3-77b8-4770-9171-d4aeba4b0cc7/openshift-controller-manager-operator/2.log"
Mar 08 03:18:36.796202 master-0 kubenswrapper[7387]: I0308 03:18:36.796103 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-rz5c8_89e15db4-c541-4d53-878d-706fa022f970/kube-scheduler-operator-container/2.log"
Mar 08 03:18:36.798948 master-0 kubenswrapper[7387]: I0308 03:18:36.798851 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-d4wnv_bd1bcaff-7dbd-4559-92fc-5453993f643e/openshift-config-operator/2.log"
Mar 08 03:18:37.641404 master-0 kubenswrapper[7387]: I0308 03:18:37.641257 7387 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:18:37.811211 master-0 kubenswrapper[7387]: I0308 03:18:37.811097 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-84bfdbbb7f-jnpl5_7af634f0-65ac-402a-acd6-a8aad11b37ab/service-ca-controller/1.log"
Mar 08 03:18:38.007822 master-0 kubenswrapper[7387]: I0308 03:18:38.007730 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" start-of-body=
Mar 08 03:18:38.008210 master-0 kubenswrapper[7387]: I0308 03:18:38.007830 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused"
Mar 08 03:18:38.381075 master-0 kubenswrapper[7387]: I0308 03:18:38.380791 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" start-of-body=
Mar 08 03:18:38.381075 master-0 kubenswrapper[7387]: I0308 03:18:38.380895 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused"
Mar 08 03:18:38.766806 master-0 kubenswrapper[7387]: E0308 03:18:38.766683 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:18:41.008309 master-0 kubenswrapper[7387]: I0308 03:18:41.008198 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" start-of-body=
Mar 08 03:18:41.008309 master-0 kubenswrapper[7387]: I0308 03:18:41.008285 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused"
Mar 08 03:18:41.381891 master-0 kubenswrapper[7387]: I0308 
03:18:41.381701 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" start-of-body= Mar 08 03:18:41.381891 master-0 kubenswrapper[7387]: I0308 03:18:41.381809 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" Mar 08 03:18:41.616345 master-0 kubenswrapper[7387]: I0308 03:18:41.616213 7387 patch_prober.go:28] interesting pod/route-controller-manager-8c4996cd4-qsvqj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 03:18:41.616345 master-0 kubenswrapper[7387]: I0308 03:18:41.616335 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 03:18:45.008193 master-0 kubenswrapper[7387]: I0308 03:18:45.008085 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 03:18:45.009158 master-0 kubenswrapper[7387]: I0308 03:18:45.009105 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 03:18:45.009366 master-0 kubenswrapper[7387]: I0308 03:18:45.009340 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" Mar 08 03:18:45.010548 master-0 kubenswrapper[7387]: I0308 03:18:45.010518 7387 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"baddc749e42f097718aa35b36ad713f89e081e60f5274e4f8ef3d143389a47d9"} pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 08 03:18:45.010703 master-0 kubenswrapper[7387]: I0308 03:18:45.010679 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" containerID="cri-o://baddc749e42f097718aa35b36ad713f89e081e60f5274e4f8ef3d143389a47d9" gracePeriod=30 Mar 08 03:18:45.027139 master-0 kubenswrapper[7387]: I0308 03:18:45.027082 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": read tcp 
10.128.0.2:34896->10.128.0.11:8443: read: connection reset by peer" start-of-body= Mar 08 03:18:45.028754 master-0 kubenswrapper[7387]: I0308 03:18:45.028701 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": read tcp 10.128.0.2:34896->10.128.0.11:8443: read: connection reset by peer" Mar 08 03:18:45.029565 master-0 kubenswrapper[7387]: I0308 03:18:45.029494 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" start-of-body= Mar 08 03:18:45.029695 master-0 kubenswrapper[7387]: I0308 03:18:45.029594 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" Mar 08 03:18:45.134822 master-0 kubenswrapper[7387]: E0308 03:18:45.134764 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-config-operator pod=openshift-config-operator-64488f9d78-d4wnv_openshift-config-operator(bd1bcaff-7dbd-4559-92fc-5453993f643e)\"" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" Mar 08 03:18:45.865814 master-0 kubenswrapper[7387]: I0308 03:18:45.865711 7387 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-d4wnv_bd1bcaff-7dbd-4559-92fc-5453993f643e/openshift-config-operator/3.log" Mar 08 03:18:45.866482 master-0 kubenswrapper[7387]: I0308 03:18:45.866434 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-d4wnv_bd1bcaff-7dbd-4559-92fc-5453993f643e/openshift-config-operator/2.log" Mar 08 03:18:45.866876 master-0 kubenswrapper[7387]: I0308 03:18:45.866830 7387 generic.go:334] "Generic (PLEG): container finished" podID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerID="baddc749e42f097718aa35b36ad713f89e081e60f5274e4f8ef3d143389a47d9" exitCode=255 Mar 08 03:18:45.866985 master-0 kubenswrapper[7387]: I0308 03:18:45.866875 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" event={"ID":"bd1bcaff-7dbd-4559-92fc-5453993f643e","Type":"ContainerDied","Data":"baddc749e42f097718aa35b36ad713f89e081e60f5274e4f8ef3d143389a47d9"} Mar 08 03:18:45.866985 master-0 kubenswrapper[7387]: I0308 03:18:45.866976 7387 scope.go:117] "RemoveContainer" containerID="af9e47bdeb07b2e79a9535acbbaf30eba9c435fc3d8897762bb3fb61a91678ea" Mar 08 03:18:45.867421 master-0 kubenswrapper[7387]: I0308 03:18:45.867378 7387 scope.go:117] "RemoveContainer" containerID="baddc749e42f097718aa35b36ad713f89e081e60f5274e4f8ef3d143389a47d9" Mar 08 03:18:45.867608 master-0 kubenswrapper[7387]: E0308 03:18:45.867564 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-config-operator pod=openshift-config-operator-64488f9d78-d4wnv_openshift-config-operator(bd1bcaff-7dbd-4559-92fc-5453993f643e)\"" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" Mar 08 03:18:46.738725 
master-0 kubenswrapper[7387]: I0308 03:18:46.738635 7387 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 03:18:46.877208 master-0 kubenswrapper[7387]: I0308 03:18:46.877113 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-d4wnv_bd1bcaff-7dbd-4559-92fc-5453993f643e/openshift-config-operator/3.log" Mar 08 03:18:47.640508 master-0 kubenswrapper[7387]: I0308 03:18:47.640389 7387 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 03:18:47.760450 master-0 kubenswrapper[7387]: I0308 03:18:47.760337 7387 scope.go:117] "RemoveContainer" containerID="35a84530b9b77d1b843b53e9598fc2ad2b53c4132c228552e8ac9e5d303df9ce" Mar 08 03:18:47.761329 master-0 kubenswrapper[7387]: I0308 03:18:47.760508 7387 scope.go:117] "RemoveContainer" containerID="1d5204ce567ac69cf82074daeb2d6d762b5dea3e2e48fc87e314063a45817203" Mar 08 03:18:47.761329 master-0 kubenswrapper[7387]: I0308 03:18:47.760649 7387 scope.go:117] "RemoveContainer" containerID="5143cbadf379a54eeca92346f6f8d879538d415d4167dd1961c3f4a4dfe1810b" Mar 08 03:18:47.761329 master-0 kubenswrapper[7387]: E0308 03:18:47.760846 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=openshift-apiserver-operator pod=openshift-apiserver-operator-799b6db4d7-gstfr_openshift-apiserver-operator(2a506cf6-bc39-4089-9caa-4c14c4d15c11)\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" podUID="2a506cf6-bc39-4089-9caa-4c14c4d15c11" Mar 08 03:18:47.761658 master-0 kubenswrapper[7387]: E0308 03:18:47.761370 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-8565d84698-h7lpf_openshift-controller-manager-operator(0722d9c3-77b8-4770-9171-d4aeba4b0cc7)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf" podUID="0722d9c3-77b8-4770-9171-d4aeba4b0cc7" Mar 08 03:18:48.768118 master-0 kubenswrapper[7387]: E0308 03:18:48.768031 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:18:48.892479 master-0 kubenswrapper[7387]: I0308 03:18:48.892361 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5685fbc7d-xbrdp_3d69f101-60a8-41fd-bcda-4eb654c626a2/csi-snapshot-controller-operator/1.log" Mar 08 03:18:48.892479 master-0 kubenswrapper[7387]: I0308 03:18:48.892468 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-xbrdp" event={"ID":"3d69f101-60a8-41fd-bcda-4eb654c626a2","Type":"ContainerStarted","Data":"c2ca8d040bfba75b786491a7f494a16b01e68ff5762368d65a86118d64a49cb6"} Mar 08 03:18:49.760698 master-0 kubenswrapper[7387]: I0308 03:18:49.760059 7387 scope.go:117] 
"RemoveContainer" containerID="c6227c869f9005e95f446273c65ad19705819a8f1fec09ed23d91f2253df5b7d" Mar 08 03:18:49.761061 master-0 kubenswrapper[7387]: I0308 03:18:49.760782 7387 scope.go:117] "RemoveContainer" containerID="279e20703ffc1523384ecb744bab2f75686744f29f2bd2fc07a960cf86d7af7c" Mar 08 03:18:49.761061 master-0 kubenswrapper[7387]: I0308 03:18:49.760873 7387 scope.go:117] "RemoveContainer" containerID="7d5086bc52f5bb65f0e405da68bda521bfa3fc867442a2ce84f387697f4853be" Mar 08 03:18:49.761287 master-0 kubenswrapper[7387]: I0308 03:18:49.761057 7387 scope.go:117] "RemoveContainer" containerID="c5943b694a77c0302101d6a324348e34a33f4a5d12b160d170755271c5624f54" Mar 08 03:18:49.761287 master-0 kubenswrapper[7387]: E0308 03:18:49.761250 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler-operator-container pod=openshift-kube-scheduler-operator-5c74bfc494-rz5c8_openshift-kube-scheduler-operator(89e15db4-c541-4d53-878d-706fa022f970)\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" podUID="89e15db4-c541-4d53-878d-706fa022f970" Mar 08 03:18:49.761553 master-0 kubenswrapper[7387]: E0308 03:18:49.761370 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=service-ca-operator pod=service-ca-operator-69b6fc6b88-vjmf6_openshift-service-ca-operator(1fa64f1b-9f10-488b-8f94-1600774062c4)\"" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6" podUID="1fa64f1b-9f10-488b-8f94-1600774062c4" Mar 08 03:18:49.761686 master-0 kubenswrapper[7387]: E0308 03:18:49.761647 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 20s 
restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-86d7cdfdfb-xtwpr_openshift-kube-controller-manager-operator(2468d2a3-ec65-4888-a86a-3f66fa311f56)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr" podUID="2468d2a3-ec65-4888-a86a-3f66fa311f56" Mar 08 03:18:50.759935 master-0 kubenswrapper[7387]: I0308 03:18:50.759140 7387 scope.go:117] "RemoveContainer" containerID="2d1f35ff4fbf411febbede650e49c2bb74f638fdc3d27726c7043dd06f0d5e3d" Mar 08 03:18:50.759935 master-0 kubenswrapper[7387]: E0308 03:18:50.759381 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=network-operator pod=network-operator-7c649bf6d4-wxrfp_openshift-network-operator(89fc77c9-b444-4828-8a35-c63ea9335245)\"" pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" podUID="89fc77c9-b444-4828-8a35-c63ea9335245" Mar 08 03:18:50.760876 master-0 kubenswrapper[7387]: I0308 03:18:50.760680 7387 scope.go:117] "RemoveContainer" containerID="72b1351e9a3c52004d63474cc4899d00eb9ec35191bb77729c1e4a2c5db91758" Mar 08 03:18:50.760876 master-0 kubenswrapper[7387]: E0308 03:18:50.760841 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-68bd585b-zcr8w_openshift-kube-apiserver-operator(5a058138-8039-4841-821b-7ee5bb8648e4)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w" podUID="5a058138-8039-4841-821b-7ee5bb8648e4" Mar 08 03:18:50.908922 master-0 kubenswrapper[7387]: I0308 03:18:50.908847 7387 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-service-ca_service-ca-84bfdbbb7f-jnpl5_7af634f0-65ac-402a-acd6-a8aad11b37ab/service-ca-controller/1.log" Mar 08 03:18:50.909125 master-0 kubenswrapper[7387]: I0308 03:18:50.908973 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5" event={"ID":"7af634f0-65ac-402a-acd6-a8aad11b37ab","Type":"ContainerStarted","Data":"4ba849afa6c1096c68700ba2a3716f297bd7a9a7ae2cf94f600da7b5f14c3033"} Mar 08 03:18:51.615748 master-0 kubenswrapper[7387]: I0308 03:18:51.615667 7387 patch_prober.go:28] interesting pod/route-controller-manager-8c4996cd4-qsvqj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 03:18:51.616027 master-0 kubenswrapper[7387]: I0308 03:18:51.615762 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 03:18:56.740335 master-0 kubenswrapper[7387]: I0308 03:18:56.740199 7387 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 03:18:56.741559 master-0 kubenswrapper[7387]: I0308 03:18:56.740357 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:18:56.741559 master-0 kubenswrapper[7387]: I0308 03:18:56.741052 7387 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"e305d74af325e5eeb0f6ddb53f983c1d6252a98bbdc0c950b558e6fbfd49c54c"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 08 03:18:56.741559 master-0 kubenswrapper[7387]: I0308 03:18:56.741117 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" containerID="cri-o://e305d74af325e5eeb0f6ddb53f983c1d6252a98bbdc0c950b558e6fbfd49c54c" gracePeriod=30 Mar 08 03:18:56.955778 master-0 kubenswrapper[7387]: I0308 03:18:56.955511 7387 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="e305d74af325e5eeb0f6ddb53f983c1d6252a98bbdc0c950b558e6fbfd49c54c" exitCode=2 Mar 08 03:18:56.955778 master-0 kubenswrapper[7387]: I0308 03:18:56.955582 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"e305d74af325e5eeb0f6ddb53f983c1d6252a98bbdc0c950b558e6fbfd49c54c"} Mar 08 03:18:56.955778 master-0 kubenswrapper[7387]: I0308 03:18:56.955631 7387 scope.go:117] "RemoveContainer" containerID="c112ca6cd11ea4c9ce69d6d6d519c8fce15ec706e2d5984472b111b57942340d" Mar 08 03:18:57.640691 master-0 kubenswrapper[7387]: I0308 03:18:57.640560 7387 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" probeResult="failure" output="Get 
\"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 03:18:57.641690 master-0 kubenswrapper[7387]: I0308 03:18:57.640719 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:18:57.967683 master-0 kubenswrapper[7387]: I0308 03:18:57.966112 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/3.log" Mar 08 03:18:57.967683 master-0 kubenswrapper[7387]: I0308 03:18:57.966732 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/2.log" Mar 08 03:18:57.967683 master-0 kubenswrapper[7387]: I0308 03:18:57.966789 7387 generic.go:334] "Generic (PLEG): container finished" podID="9fb588a9-6240-4513-8e4b-248eb43d3f06" containerID="5d5ab4a36feb6e5428f4fe82fd02d1bf53851b6363e11c4e53ba7fc20e220f93" exitCode=1 Mar 08 03:18:57.967683 master-0 kubenswrapper[7387]: I0308 03:18:57.966853 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" event={"ID":"9fb588a9-6240-4513-8e4b-248eb43d3f06","Type":"ContainerDied","Data":"5d5ab4a36feb6e5428f4fe82fd02d1bf53851b6363e11c4e53ba7fc20e220f93"} Mar 08 03:18:57.967683 master-0 kubenswrapper[7387]: I0308 03:18:57.966892 7387 scope.go:117] "RemoveContainer" containerID="c6876a4a4ece00ccff5b60dc8a905f0f7de29a860707746f02e52710809c00e5" Mar 08 03:18:57.967683 master-0 kubenswrapper[7387]: I0308 03:18:57.967638 7387 scope.go:117] "RemoveContainer" containerID="5d5ab4a36feb6e5428f4fe82fd02d1bf53851b6363e11c4e53ba7fc20e220f93" Mar 08 03:18:57.968861 master-0 kubenswrapper[7387]: E0308 03:18:57.967972 
7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-kfmd9_openshift-cluster-storage-operator(9fb588a9-6240-4513-8e4b-248eb43d3f06)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" podUID="9fb588a9-6240-4513-8e4b-248eb43d3f06" Mar 08 03:18:57.975140 master-0 kubenswrapper[7387]: I0308 03:18:57.975105 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"21c52df530390b93390189f772de90cc9461c78b27a38ea7bd0553d5255d9c65"} Mar 08 03:18:57.977230 master-0 kubenswrapper[7387]: I0308 03:18:57.977165 7387 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"be3f100eb7ee4d7b6f435b1a7bf70e291908c984ecfe21da6d4b4fe3a36ab5f2"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 08 03:18:57.977316 master-0 kubenswrapper[7387]: I0308 03:18:57.977272 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" containerID="cri-o://be3f100eb7ee4d7b6f435b1a7bf70e291908c984ecfe21da6d4b4fe3a36ab5f2" gracePeriod=30 Mar 08 03:18:58.759612 master-0 kubenswrapper[7387]: I0308 03:18:58.759506 7387 scope.go:117] "RemoveContainer" containerID="baddc749e42f097718aa35b36ad713f89e081e60f5274e4f8ef3d143389a47d9" Mar 08 03:18:58.759973 master-0 kubenswrapper[7387]: E0308 03:18:58.759887 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"openshift-config-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-config-operator pod=openshift-config-operator-64488f9d78-d4wnv_openshift-config-operator(bd1bcaff-7dbd-4559-92fc-5453993f643e)\"" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" Mar 08 03:18:58.982145 master-0 kubenswrapper[7387]: I0308 03:18:58.982110 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/3.log" Mar 08 03:19:00.760335 master-0 kubenswrapper[7387]: I0308 03:19:00.760248 7387 scope.go:117] "RemoveContainer" containerID="1d5204ce567ac69cf82074daeb2d6d762b5dea3e2e48fc87e314063a45817203" Mar 08 03:19:01.000793 master-0 kubenswrapper[7387]: I0308 03:19:01.000720 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-gstfr_2a506cf6-bc39-4089-9caa-4c14c4d15c11/openshift-apiserver-operator/2.log" Mar 08 03:19:01.001192 master-0 kubenswrapper[7387]: I0308 03:19:01.000811 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" event={"ID":"2a506cf6-bc39-4089-9caa-4c14c4d15c11","Type":"ContainerStarted","Data":"62e972b8bed8e15ecb54cf31905c8e961d34ba4506e8988ac047b3329919293e"} Mar 08 03:19:01.615927 master-0 kubenswrapper[7387]: I0308 03:19:01.615853 7387 patch_prober.go:28] interesting pod/route-controller-manager-8c4996cd4-qsvqj container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 03:19:01.616190 master-0 kubenswrapper[7387]: I0308 03:19:01.615964 7387 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:19:01.616190 master-0 kubenswrapper[7387]: I0308 03:19:01.615862 7387 patch_prober.go:28] interesting pod/route-controller-manager-8c4996cd4-qsvqj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 03:19:01.616190 master-0 kubenswrapper[7387]: I0308 03:19:01.616058 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:19:01.760411 master-0 kubenswrapper[7387]: I0308 03:19:01.760356 7387 scope.go:117] "RemoveContainer" containerID="c6227c869f9005e95f446273c65ad19705819a8f1fec09ed23d91f2253df5b7d"
Mar 08 03:19:02.760308 master-0 kubenswrapper[7387]: I0308 03:19:02.760219 7387 scope.go:117] "RemoveContainer" containerID="2d1f35ff4fbf411febbede650e49c2bb74f638fdc3d27726c7043dd06f0d5e3d"
Mar 08 03:19:02.760602 master-0 kubenswrapper[7387]: I0308 03:19:02.760489 7387 scope.go:117] "RemoveContainer" containerID="5143cbadf379a54eeca92346f6f8d879538d415d4167dd1961c3f4a4dfe1810b"
Mar 08 03:19:02.761341 master-0 kubenswrapper[7387]: I0308 03:19:02.760812 7387 scope.go:117] "RemoveContainer" containerID="279e20703ffc1523384ecb744bab2f75686744f29f2bd2fc07a960cf86d7af7c"
Mar 08 03:19:03.022831 master-0 kubenswrapper[7387]: I0308 03:19:03.020149 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-xtwpr_2468d2a3-ec65-4888-a86a-3f66fa311f56/kube-controller-manager-operator/2.log"
Mar 08 03:19:03.022831 master-0 kubenswrapper[7387]: I0308 03:19:03.020283 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr" event={"ID":"2468d2a3-ec65-4888-a86a-3f66fa311f56","Type":"ContainerStarted","Data":"f750a9def8422866b22d39a2cd3d196c793426a1bcfc147c9836ec1f7382a781"}
Mar 08 03:19:03.028390 master-0 kubenswrapper[7387]: I0308 03:19:03.028331 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-h7lpf_0722d9c3-77b8-4770-9171-d4aeba4b0cc7/openshift-controller-manager-operator/2.log"
Mar 08 03:19:03.028509 master-0 kubenswrapper[7387]: I0308 03:19:03.028443 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf" event={"ID":"0722d9c3-77b8-4770-9171-d4aeba4b0cc7","Type":"ContainerStarted","Data":"c94b73e519e383394ac52486ba137a12e3d62bc0ee65d9b6506885ef4c56113f"}
Mar 08 03:19:03.319510 master-0 kubenswrapper[7387]: I0308 03:19:03.319360 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:19:03.739365 master-0 kubenswrapper[7387]: I0308 03:19:03.739185 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:19:03.762615 master-0 kubenswrapper[7387]: I0308 03:19:03.762545 7387 scope.go:117] "RemoveContainer" containerID="72b1351e9a3c52004d63474cc4899d00eb9ec35191bb77729c1e4a2c5db91758"
Mar 08 03:19:03.965152 master-0 kubenswrapper[7387]: I0308 03:19:03.964575 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:19:04.037737 master-0 kubenswrapper[7387]: I0308 03:19:04.037699 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-wxrfp_89fc77c9-b444-4828-8a35-c63ea9335245/network-operator/2.log"
Mar 08 03:19:04.037891 master-0 kubenswrapper[7387]: I0308 03:19:04.037808 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" event={"ID":"89fc77c9-b444-4828-8a35-c63ea9335245","Type":"ContainerStarted","Data":"58762e55602a1be7a4992471d8fa05f5d35714c62436d860681056d609af0404"}
Mar 08 03:19:04.041455 master-0 kubenswrapper[7387]: I0308 03:19:04.041415 7387 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="be3f100eb7ee4d7b6f435b1a7bf70e291908c984ecfe21da6d4b4fe3a36ab5f2" exitCode=255
Mar 08 03:19:04.041559 master-0 kubenswrapper[7387]: I0308 03:19:04.041485 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"be3f100eb7ee4d7b6f435b1a7bf70e291908c984ecfe21da6d4b4fe3a36ab5f2"}
Mar 08 03:19:04.041559 master-0 kubenswrapper[7387]: I0308 03:19:04.041524 7387 scope.go:117] "RemoveContainer" containerID="67a655ba69c1284df3e55d35d8747eb2453fb400eccb0f1604d78be6e1c5d034"
Mar 08 03:19:04.043655 master-0 kubenswrapper[7387]: I0308 03:19:04.043632 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-rz5c8_89e15db4-c541-4d53-878d-706fa022f970/kube-scheduler-operator-container/2.log"
Mar 08 03:19:04.043736 master-0 kubenswrapper[7387]: I0308 03:19:04.043683 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" event={"ID":"89e15db4-c541-4d53-878d-706fa022f970","Type":"ContainerStarted","Data":"00d9ac3c9b6193b454aa568c1a383fab452df49e6573435f6a143be4c2708486"}
Mar 08 03:19:04.050310 master-0 kubenswrapper[7387]: I0308 03:19:04.049947 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-8c4996cd4-qsvqj_e2495994-736c-4916-b210-ff5633f3387d/route-controller-manager/1.log"
Mar 08 03:19:04.050418 master-0 kubenswrapper[7387]: I0308 03:19:04.050306 7387 generic.go:334] "Generic (PLEG): container finished" podID="e2495994-736c-4916-b210-ff5633f3387d" containerID="d6083de08fa8a9f86a3a4636376820118e5d2c03d8b520f0635e9d2361ef8efe" exitCode=255
Mar 08 03:19:04.050418 master-0 kubenswrapper[7387]: I0308 03:19:04.050343 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" event={"ID":"e2495994-736c-4916-b210-ff5633f3387d","Type":"ContainerDied","Data":"d6083de08fa8a9f86a3a4636376820118e5d2c03d8b520f0635e9d2361ef8efe"}
Mar 08 03:19:04.050804 master-0 kubenswrapper[7387]: I0308 03:19:04.050770 7387 scope.go:117] "RemoveContainer" containerID="d6083de08fa8a9f86a3a4636376820118e5d2c03d8b520f0635e9d2361ef8efe"
Mar 08 03:19:04.050986 master-0 kubenswrapper[7387]: E0308 03:19:04.050956 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=route-controller-manager pod=route-controller-manager-8c4996cd4-qsvqj_openshift-route-controller-manager(e2495994-736c-4916-b210-ff5633f3387d)\"" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" podUID="e2495994-736c-4916-b210-ff5633f3387d"
Mar 08 03:19:04.073135 master-0 kubenswrapper[7387]: I0308 03:19:04.073097 7387 scope.go:117] "RemoveContainer" containerID="d89cedfa5c6dd99c3607e2b41fd1a5a7721d2add34c9b3bd4ddfc268530aeaaf"
Mar 08 03:19:04.759498 master-0 kubenswrapper[7387]: I0308 03:19:04.759426 7387 scope.go:117] "RemoveContainer" containerID="c5943b694a77c0302101d6a324348e34a33f4a5d12b160d170755271c5624f54"
Mar 08 03:19:05.061220 master-0 kubenswrapper[7387]: I0308 03:19:05.061068 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-8c4996cd4-qsvqj_e2495994-736c-4916-b210-ff5633f3387d/route-controller-manager/1.log"
Mar 08 03:19:05.063955 master-0 kubenswrapper[7387]: I0308 03:19:05.063867 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-zcr8w_5a058138-8039-4841-821b-7ee5bb8648e4/kube-apiserver-operator/2.log"
Mar 08 03:19:05.064119 master-0 kubenswrapper[7387]: I0308 03:19:05.064067 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w" event={"ID":"5a058138-8039-4841-821b-7ee5bb8648e4","Type":"ContainerStarted","Data":"15751ae441f57c6481deb8b5cc3f72916e46489440f9eb8189b8afd0e24064b8"}
Mar 08 03:19:05.068082 master-0 kubenswrapper[7387]: I0308 03:19:05.068031 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-69b6fc6b88-vjmf6_1fa64f1b-9f10-488b-8f94-1600774062c4/service-ca-operator/2.log"
Mar 08 03:19:05.068218 master-0 kubenswrapper[7387]: I0308 03:19:05.068128 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6" event={"ID":"1fa64f1b-9f10-488b-8f94-1600774062c4","Type":"ContainerStarted","Data":"0fe962616e00ad1f24a82e68fd64a4be663ed91d0dbf2ea81de6089d37bf0513"}
Mar 08 03:19:05.073197 master-0 kubenswrapper[7387]: I0308 03:19:05.073135 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"85ebc2aadcc00fbddf926f6ab17ab8c204935ad575ebd07cf7adcfc06b4a6c08"}
Mar 08 03:19:06.449323 master-0 kubenswrapper[7387]: I0308 03:19:06.449242 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:19:06.845736 master-0 kubenswrapper[7387]: I0308 03:19:06.845666 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw"
Mar 08 03:19:07.094304 master-0 kubenswrapper[7387]: I0308 03:19:07.094231 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:19:10.614560 master-0 kubenswrapper[7387]: I0308 03:19:10.614495 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj"
Mar 08 03:19:10.615219 master-0 kubenswrapper[7387]: I0308 03:19:10.614962 7387 scope.go:117] "RemoveContainer" containerID="d6083de08fa8a9f86a3a4636376820118e5d2c03d8b520f0635e9d2361ef8efe"
Mar 08 03:19:10.615270 master-0 kubenswrapper[7387]: E0308 03:19:10.615213 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=route-controller-manager pod=route-controller-manager-8c4996cd4-qsvqj_openshift-route-controller-manager(e2495994-736c-4916-b210-ff5633f3387d)\"" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" podUID="e2495994-736c-4916-b210-ff5633f3387d"
Mar 08 03:19:12.759451 master-0 kubenswrapper[7387]: I0308 03:19:12.759401 7387 scope.go:117] "RemoveContainer" containerID="5d5ab4a36feb6e5428f4fe82fd02d1bf53851b6363e11c4e53ba7fc20e220f93"
Mar 08 03:19:12.760699 master-0 kubenswrapper[7387]: E0308 03:19:12.760623 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-kfmd9_openshift-cluster-storage-operator(9fb588a9-6240-4513-8e4b-248eb43d3f06)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" podUID="9fb588a9-6240-4513-8e4b-248eb43d3f06"
Mar 08 03:19:13.764031 master-0 kubenswrapper[7387]: I0308 03:19:13.763946 7387 scope.go:117] "RemoveContainer" containerID="baddc749e42f097718aa35b36ad713f89e081e60f5274e4f8ef3d143389a47d9"
Mar 08 03:19:13.764808 master-0 kubenswrapper[7387]: E0308 03:19:13.764294 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=openshift-config-operator pod=openshift-config-operator-64488f9d78-d4wnv_openshift-config-operator(bd1bcaff-7dbd-4559-92fc-5453993f643e)\"" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e"
Mar 08 03:19:14.679448 master-0 kubenswrapper[7387]: I0308 03:19:14.679369 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:19:14.688017 master-0 kubenswrapper[7387]: I0308 03:19:14.687955 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:19:15.180264 master-0 kubenswrapper[7387]: I0308 03:19:15.180182 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:19:24.760331 master-0 kubenswrapper[7387]: I0308 03:19:24.760245 7387 scope.go:117] "RemoveContainer" containerID="d6083de08fa8a9f86a3a4636376820118e5d2c03d8b520f0635e9d2361ef8efe"
Mar 08 03:19:25.238919 master-0 kubenswrapper[7387]: I0308 03:19:25.238821 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-8c4996cd4-qsvqj_e2495994-736c-4916-b210-ff5633f3387d/route-controller-manager/1.log"
Mar 08 03:19:25.239176 master-0 kubenswrapper[7387]: I0308 03:19:25.239150 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" event={"ID":"e2495994-736c-4916-b210-ff5633f3387d","Type":"ContainerStarted","Data":"ba06595e6a5f3ba16e78e9f249cd73ba267f2f907f5c29c1de1760f3a56ccdd7"}
Mar 08 03:19:25.239635 master-0 kubenswrapper[7387]: I0308 03:19:25.239575 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj"
Mar 08 03:19:25.458754 master-0 kubenswrapper[7387]: I0308 03:19:25.458683 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj"
Mar 08 03:19:25.759716 master-0 kubenswrapper[7387]: I0308 03:19:25.759527 7387 scope.go:117] "RemoveContainer" containerID="5d5ab4a36feb6e5428f4fe82fd02d1bf53851b6363e11c4e53ba7fc20e220f93"
Mar 08 03:19:25.760090 master-0 kubenswrapper[7387]: E0308 03:19:25.759845 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-kfmd9_openshift-cluster-storage-operator(9fb588a9-6240-4513-8e4b-248eb43d3f06)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" podUID="9fb588a9-6240-4513-8e4b-248eb43d3f06"
Mar 08 03:19:27.759676 master-0 kubenswrapper[7387]: I0308 03:19:27.759562 7387 scope.go:117] "RemoveContainer" containerID="baddc749e42f097718aa35b36ad713f89e081e60f5274e4f8ef3d143389a47d9"
Mar 08 03:19:28.265534 master-0 kubenswrapper[7387]: I0308 03:19:28.265457 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-d4wnv_bd1bcaff-7dbd-4559-92fc-5453993f643e/openshift-config-operator/3.log"
Mar 08 03:19:28.266184 master-0 kubenswrapper[7387]: I0308 03:19:28.266117 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" event={"ID":"bd1bcaff-7dbd-4559-92fc-5453993f643e","Type":"ContainerStarted","Data":"3ffe89ef5d1c010872dcc8d98905a0b3c74a65a6e59320222ab4708980d7907c"}
Mar 08 03:19:28.266500 master-0 kubenswrapper[7387]: I0308 03:19:28.266471 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"
Mar 08 03:19:32.387351 master-0 kubenswrapper[7387]: I0308 03:19:32.387233 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"
Mar 08 03:19:34.354548 master-0 kubenswrapper[7387]: I0308 03:19:34.354471 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bv2v9"]
Mar 08 03:19:34.355316 master-0 kubenswrapper[7387]: I0308 03:19:34.354813 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bv2v9" podUID="10895809-a444-42ec-a41f-111e17f6beb3" containerName="registry-server" containerID="cri-o://e758edc1c00f8607b75cd7d8c61fb0e8adc03a2c4a9c4602da27bdc698b41825" gracePeriod=2
Mar 08 03:19:34.557356 master-0 kubenswrapper[7387]: I0308 03:19:34.556778 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l2dj4"]
Mar 08 03:19:34.557356 master-0 kubenswrapper[7387]: I0308 03:19:34.557162 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l2dj4" podUID="7afe61b3-1460-48ed-9369-4d9893d2f4f4" containerName="registry-server" containerID="cri-o://0151a35bf1531a9a80bb3a7b0b88eb0c4b3a925a22fdf316ade1341157c849b4" gracePeriod=2
Mar 08 03:19:34.763161 master-0 kubenswrapper[7387]: I0308 03:19:34.763068 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-82rfr"]
Mar 08 03:19:34.763712 master-0 kubenswrapper[7387]: E0308 03:19:34.763217 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed2e0194-6b50-4478-aba4-21193d2c18aa" containerName="installer"
Mar 08 03:19:34.763712 master-0 kubenswrapper[7387]: I0308 03:19:34.763229 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed2e0194-6b50-4478-aba4-21193d2c18aa" containerName="installer"
Mar 08 03:19:34.763712 master-0 kubenswrapper[7387]: E0308 03:19:34.763238 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9732f3d-49d0-4400-ab54-ce029c49ec37" containerName="installer"
Mar 08 03:19:34.763712 master-0 kubenswrapper[7387]: I0308 03:19:34.763244 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9732f3d-49d0-4400-ab54-ce029c49ec37" containerName="installer"
Mar 08 03:19:34.763712 master-0 kubenswrapper[7387]: E0308 03:19:34.763255 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a8d4b89-fd81-4418-9f72-c8447fad86ad" containerName="installer"
Mar 08 03:19:34.763712 master-0 kubenswrapper[7387]: I0308 03:19:34.763262 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8d4b89-fd81-4418-9f72-c8447fad86ad" containerName="installer"
Mar 08 03:19:34.763712 master-0 kubenswrapper[7387]: E0308 03:19:34.763282 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a2e5993-e0cb-4c63-9dda-abbb60bfe42b" containerName="installer"
Mar 08 03:19:34.763712 master-0 kubenswrapper[7387]: I0308 03:19:34.763288 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a2e5993-e0cb-4c63-9dda-abbb60bfe42b" containerName="installer"
Mar 08 03:19:34.763712 master-0 kubenswrapper[7387]: E0308 03:19:34.763295 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b8c5365-e7a0-4f69-913f-2e12b142e4a5" containerName="installer"
Mar 08 03:19:34.763712 master-0 kubenswrapper[7387]: I0308 03:19:34.763302 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b8c5365-e7a0-4f69-913f-2e12b142e4a5" containerName="installer"
Mar 08 03:19:34.763712 master-0 kubenswrapper[7387]: I0308 03:19:34.763372 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a8d4b89-fd81-4418-9f72-c8447fad86ad" containerName="installer"
Mar 08 03:19:34.763712 master-0 kubenswrapper[7387]: I0308 03:19:34.763383 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a2e5993-e0cb-4c63-9dda-abbb60bfe42b" containerName="installer"
Mar 08 03:19:34.763712 master-0 kubenswrapper[7387]: I0308 03:19:34.763393 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b8c5365-e7a0-4f69-913f-2e12b142e4a5" containerName="installer"
Mar 08 03:19:34.763712 master-0 kubenswrapper[7387]: I0308 03:19:34.763403 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9732f3d-49d0-4400-ab54-ce029c49ec37" containerName="installer"
Mar 08 03:19:34.763712 master-0 kubenswrapper[7387]: I0308 03:19:34.763413 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed2e0194-6b50-4478-aba4-21193d2c18aa" containerName="installer"
Mar 08 03:19:34.769290 master-0 kubenswrapper[7387]: I0308 03:19:34.763994 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-82rfr"
Mar 08 03:19:34.769290 master-0 kubenswrapper[7387]: I0308 03:19:34.766102 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-fm6df"
Mar 08 03:19:34.790599 master-0 kubenswrapper[7387]: I0308 03:19:34.790537 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-82rfr"]
Mar 08 03:19:34.816045 master-0 kubenswrapper[7387]: I0308 03:19:34.815624 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bv2v9"
Mar 08 03:19:34.843175 master-0 kubenswrapper[7387]: I0308 03:19:34.843134 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea474cd1-8693-4505-9d6f-863d78776d11-catalog-content\") pod \"community-operators-82rfr\" (UID: \"ea474cd1-8693-4505-9d6f-863d78776d11\") " pod="openshift-marketplace/community-operators-82rfr"
Mar 08 03:19:34.843671 master-0 kubenswrapper[7387]: I0308 03:19:34.843647 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea474cd1-8693-4505-9d6f-863d78776d11-utilities\") pod \"community-operators-82rfr\" (UID: \"ea474cd1-8693-4505-9d6f-863d78776d11\") " pod="openshift-marketplace/community-operators-82rfr"
Mar 08 03:19:34.843863 master-0 kubenswrapper[7387]: I0308 03:19:34.843834 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r6wb\" (UniqueName: \"kubernetes.io/projected/ea474cd1-8693-4505-9d6f-863d78776d11-kube-api-access-2r6wb\") pod \"community-operators-82rfr\" (UID: \"ea474cd1-8693-4505-9d6f-863d78776d11\") " pod="openshift-marketplace/community-operators-82rfr"
Mar 08 03:19:34.945428 master-0 kubenswrapper[7387]: I0308 03:19:34.944953 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10895809-a444-42ec-a41f-111e17f6beb3-utilities\") pod \"10895809-a444-42ec-a41f-111e17f6beb3\" (UID: \"10895809-a444-42ec-a41f-111e17f6beb3\") "
Mar 08 03:19:34.945428 master-0 kubenswrapper[7387]: I0308 03:19:34.945036 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10895809-a444-42ec-a41f-111e17f6beb3-catalog-content\") pod \"10895809-a444-42ec-a41f-111e17f6beb3\" (UID: \"10895809-a444-42ec-a41f-111e17f6beb3\") "
Mar 08 03:19:34.945428 master-0 kubenswrapper[7387]: I0308 03:19:34.945149 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8889r\" (UniqueName: \"kubernetes.io/projected/10895809-a444-42ec-a41f-111e17f6beb3-kube-api-access-8889r\") pod \"10895809-a444-42ec-a41f-111e17f6beb3\" (UID: \"10895809-a444-42ec-a41f-111e17f6beb3\") "
Mar 08 03:19:34.945428 master-0 kubenswrapper[7387]: I0308 03:19:34.945376 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea474cd1-8693-4505-9d6f-863d78776d11-catalog-content\") pod \"community-operators-82rfr\" (UID: \"ea474cd1-8693-4505-9d6f-863d78776d11\") " pod="openshift-marketplace/community-operators-82rfr"
Mar 08 03:19:34.945428 master-0 kubenswrapper[7387]: I0308 03:19:34.945417 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea474cd1-8693-4505-9d6f-863d78776d11-utilities\") pod \"community-operators-82rfr\" (UID: \"ea474cd1-8693-4505-9d6f-863d78776d11\") " pod="openshift-marketplace/community-operators-82rfr"
Mar 08 03:19:34.948128 master-0 kubenswrapper[7387]: I0308 03:19:34.945465 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r6wb\" (UniqueName: \"kubernetes.io/projected/ea474cd1-8693-4505-9d6f-863d78776d11-kube-api-access-2r6wb\") pod \"community-operators-82rfr\" (UID: \"ea474cd1-8693-4505-9d6f-863d78776d11\") " pod="openshift-marketplace/community-operators-82rfr"
Mar 08 03:19:34.949866 master-0 kubenswrapper[7387]: I0308 03:19:34.948946 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea474cd1-8693-4505-9d6f-863d78776d11-catalog-content\") pod \"community-operators-82rfr\" (UID: \"ea474cd1-8693-4505-9d6f-863d78776d11\") " pod="openshift-marketplace/community-operators-82rfr"
Mar 08 03:19:34.949866 master-0 kubenswrapper[7387]: I0308 03:19:34.949077 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea474cd1-8693-4505-9d6f-863d78776d11-utilities\") pod \"community-operators-82rfr\" (UID: \"ea474cd1-8693-4505-9d6f-863d78776d11\") " pod="openshift-marketplace/community-operators-82rfr"
Mar 08 03:19:34.949866 master-0 kubenswrapper[7387]: I0308 03:19:34.949797 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10895809-a444-42ec-a41f-111e17f6beb3-utilities" (OuterVolumeSpecName: "utilities") pod "10895809-a444-42ec-a41f-111e17f6beb3" (UID: "10895809-a444-42ec-a41f-111e17f6beb3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 03:19:34.954225 master-0 kubenswrapper[7387]: I0308 03:19:34.954128 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10895809-a444-42ec-a41f-111e17f6beb3-kube-api-access-8889r" (OuterVolumeSpecName: "kube-api-access-8889r") pod "10895809-a444-42ec-a41f-111e17f6beb3" (UID: "10895809-a444-42ec-a41f-111e17f6beb3"). InnerVolumeSpecName "kube-api-access-8889r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:19:34.964473 master-0 kubenswrapper[7387]: I0308 03:19:34.964443 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l2dj4"
Mar 08 03:19:34.967886 master-0 kubenswrapper[7387]: I0308 03:19:34.967836 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r97mb"]
Mar 08 03:19:34.968058 master-0 kubenswrapper[7387]: E0308 03:19:34.968030 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10895809-a444-42ec-a41f-111e17f6beb3" containerName="extract-content"
Mar 08 03:19:34.968058 master-0 kubenswrapper[7387]: I0308 03:19:34.968047 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="10895809-a444-42ec-a41f-111e17f6beb3" containerName="extract-content"
Mar 08 03:19:34.968058 master-0 kubenswrapper[7387]: E0308 03:19:34.968059 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7afe61b3-1460-48ed-9369-4d9893d2f4f4" containerName="registry-server"
Mar 08 03:19:34.968194 master-0 kubenswrapper[7387]: I0308 03:19:34.968069 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="7afe61b3-1460-48ed-9369-4d9893d2f4f4" containerName="registry-server"
Mar 08 03:19:34.968194 master-0 kubenswrapper[7387]: E0308 03:19:34.968083 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7afe61b3-1460-48ed-9369-4d9893d2f4f4" containerName="extract-utilities"
Mar 08 03:19:34.968194 master-0 kubenswrapper[7387]: I0308 03:19:34.968090 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="7afe61b3-1460-48ed-9369-4d9893d2f4f4" containerName="extract-utilities"
Mar 08 03:19:34.968194 master-0 kubenswrapper[7387]: E0308 03:19:34.968098 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10895809-a444-42ec-a41f-111e17f6beb3" containerName="registry-server"
Mar 08 03:19:34.968194 master-0 kubenswrapper[7387]: I0308 03:19:34.968104 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="10895809-a444-42ec-a41f-111e17f6beb3" containerName="registry-server"
Mar 08 03:19:34.968194 master-0 kubenswrapper[7387]: E0308 03:19:34.968116 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7afe61b3-1460-48ed-9369-4d9893d2f4f4" containerName="extract-content"
Mar 08 03:19:34.968194 master-0 kubenswrapper[7387]: I0308 03:19:34.968122 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="7afe61b3-1460-48ed-9369-4d9893d2f4f4" containerName="extract-content"
Mar 08 03:19:34.968194 master-0 kubenswrapper[7387]: E0308 03:19:34.968132 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10895809-a444-42ec-a41f-111e17f6beb3" containerName="extract-utilities"
Mar 08 03:19:34.968194 master-0 kubenswrapper[7387]: I0308 03:19:34.968138 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="10895809-a444-42ec-a41f-111e17f6beb3" containerName="extract-utilities"
Mar 08 03:19:34.968546 master-0 kubenswrapper[7387]: I0308 03:19:34.968218 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="10895809-a444-42ec-a41f-111e17f6beb3" containerName="registry-server"
Mar 08 03:19:34.968546 master-0 kubenswrapper[7387]: I0308 03:19:34.968229 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="7afe61b3-1460-48ed-9369-4d9893d2f4f4" containerName="registry-server"
Mar 08 03:19:34.968984 master-0 kubenswrapper[7387]: I0308 03:19:34.968954 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r97mb"
Mar 08 03:19:34.972681 master-0 kubenswrapper[7387]: I0308 03:19:34.972631 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-mw5z6"
Mar 08 03:19:34.975803 master-0 kubenswrapper[7387]: I0308 03:19:34.975761 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r6wb\" (UniqueName: \"kubernetes.io/projected/ea474cd1-8693-4505-9d6f-863d78776d11-kube-api-access-2r6wb\") pod \"community-operators-82rfr\" (UID: \"ea474cd1-8693-4505-9d6f-863d78776d11\") " pod="openshift-marketplace/community-operators-82rfr"
Mar 08 03:19:34.991878 master-0 kubenswrapper[7387]: I0308 03:19:34.991811 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r97mb"]
Mar 08 03:19:35.033707 master-0 kubenswrapper[7387]: I0308 03:19:35.033649 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10895809-a444-42ec-a41f-111e17f6beb3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10895809-a444-42ec-a41f-111e17f6beb3" (UID: "10895809-a444-42ec-a41f-111e17f6beb3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 03:19:35.046318 master-0 kubenswrapper[7387]: I0308 03:19:35.046270 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8889r\" (UniqueName: \"kubernetes.io/projected/10895809-a444-42ec-a41f-111e17f6beb3-kube-api-access-8889r\") on node \"master-0\" DevicePath \"\""
Mar 08 03:19:35.046318 master-0 kubenswrapper[7387]: I0308 03:19:35.046316 7387 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10895809-a444-42ec-a41f-111e17f6beb3-utilities\") on node \"master-0\" DevicePath \"\""
Mar 08 03:19:35.046449 master-0 kubenswrapper[7387]: I0308 03:19:35.046334 7387 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10895809-a444-42ec-a41f-111e17f6beb3-catalog-content\") on node \"master-0\" DevicePath \"\""
Mar 08 03:19:35.136940 master-0 kubenswrapper[7387]: I0308 03:19:35.136749 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-82rfr"
Mar 08 03:19:35.147381 master-0 kubenswrapper[7387]: I0308 03:19:35.147327 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whz5v\" (UniqueName: \"kubernetes.io/projected/7afe61b3-1460-48ed-9369-4d9893d2f4f4-kube-api-access-whz5v\") pod \"7afe61b3-1460-48ed-9369-4d9893d2f4f4\" (UID: \"7afe61b3-1460-48ed-9369-4d9893d2f4f4\") "
Mar 08 03:19:35.147515 master-0 kubenswrapper[7387]: I0308 03:19:35.147475 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7afe61b3-1460-48ed-9369-4d9893d2f4f4-catalog-content\") pod \"7afe61b3-1460-48ed-9369-4d9893d2f4f4\" (UID: \"7afe61b3-1460-48ed-9369-4d9893d2f4f4\") "
Mar 08 03:19:35.147694 master-0 kubenswrapper[7387]: I0308 03:19:35.147637 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7afe61b3-1460-48ed-9369-4d9893d2f4f4-utilities\") pod \"7afe61b3-1460-48ed-9369-4d9893d2f4f4\" (UID: \"7afe61b3-1460-48ed-9369-4d9893d2f4f4\") "
Mar 08 03:19:35.148041 master-0 kubenswrapper[7387]: I0308 03:19:35.148008 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efd90b06-2733-4086-8d70-b9aed3f7c5fa-catalog-content\") pod \"certified-operators-r97mb\" (UID: \"efd90b06-2733-4086-8d70-b9aed3f7c5fa\") " pod="openshift-marketplace/certified-operators-r97mb"
Mar 08 03:19:35.148527 master-0 kubenswrapper[7387]: I0308 03:19:35.148057 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5qkq\" (UniqueName: \"kubernetes.io/projected/efd90b06-2733-4086-8d70-b9aed3f7c5fa-kube-api-access-w5qkq\") pod \"certified-operators-r97mb\" (UID: \"efd90b06-2733-4086-8d70-b9aed3f7c5fa\") " pod="openshift-marketplace/certified-operators-r97mb"
Mar 08 03:19:35.148527 master-0 kubenswrapper[7387]: I0308 03:19:35.148265 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efd90b06-2733-4086-8d70-b9aed3f7c5fa-utilities\") pod \"certified-operators-r97mb\" (UID: \"efd90b06-2733-4086-8d70-b9aed3f7c5fa\") " pod="openshift-marketplace/certified-operators-r97mb"
Mar 08 03:19:35.149431 master-0 kubenswrapper[7387]: I0308 03:19:35.149354 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afe61b3-1460-48ed-9369-4d9893d2f4f4-utilities" (OuterVolumeSpecName: "utilities") pod "7afe61b3-1460-48ed-9369-4d9893d2f4f4" (UID: "7afe61b3-1460-48ed-9369-4d9893d2f4f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 03:19:35.153759 master-0 kubenswrapper[7387]: I0308 03:19:35.153687 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afe61b3-1460-48ed-9369-4d9893d2f4f4-kube-api-access-whz5v" (OuterVolumeSpecName: "kube-api-access-whz5v") pod "7afe61b3-1460-48ed-9369-4d9893d2f4f4" (UID: "7afe61b3-1460-48ed-9369-4d9893d2f4f4"). InnerVolumeSpecName "kube-api-access-whz5v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:19:35.250032 master-0 kubenswrapper[7387]: I0308 03:19:35.249959 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efd90b06-2733-4086-8d70-b9aed3f7c5fa-catalog-content\") pod \"certified-operators-r97mb\" (UID: \"efd90b06-2733-4086-8d70-b9aed3f7c5fa\") " pod="openshift-marketplace/certified-operators-r97mb"
Mar 08 03:19:35.250141 master-0 kubenswrapper[7387]: I0308 03:19:35.250056 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5qkq\" (UniqueName: \"kubernetes.io/projected/efd90b06-2733-4086-8d70-b9aed3f7c5fa-kube-api-access-w5qkq\") pod \"certified-operators-r97mb\" (UID: \"efd90b06-2733-4086-8d70-b9aed3f7c5fa\") " pod="openshift-marketplace/certified-operators-r97mb"
Mar 08 03:19:35.250192 master-0 kubenswrapper[7387]: I0308 03:19:35.250141 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efd90b06-2733-4086-8d70-b9aed3f7c5fa-utilities\") pod \"certified-operators-r97mb\" (UID: \"efd90b06-2733-4086-8d70-b9aed3f7c5fa\") " pod="openshift-marketplace/certified-operators-r97mb"
Mar 08 03:19:35.250285 master-0 kubenswrapper[7387]: I0308 03:19:35.250224 7387 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7afe61b3-1460-48ed-9369-4d9893d2f4f4-utilities\") on node \"master-0\" DevicePath \"\""
Mar 08 03:19:35.250285 master-0 kubenswrapper[7387]: I0308 03:19:35.250262 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whz5v\" (UniqueName: \"kubernetes.io/projected/7afe61b3-1460-48ed-9369-4d9893d2f4f4-kube-api-access-whz5v\") on node \"master-0\" DevicePath \"\""
Mar 08 03:19:35.251111 master-0 kubenswrapper[7387]: I0308 03:19:35.251023 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efd90b06-2733-4086-8d70-b9aed3f7c5fa-catalog-content\") pod \"certified-operators-r97mb\" (UID: \"efd90b06-2733-4086-8d70-b9aed3f7c5fa\") " pod="openshift-marketplace/certified-operators-r97mb"
Mar 08 03:19:35.251111 master-0 kubenswrapper[7387]: I0308 03:19:35.251075 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efd90b06-2733-4086-8d70-b9aed3f7c5fa-utilities\") pod \"certified-operators-r97mb\" (UID: \"efd90b06-2733-4086-8d70-b9aed3f7c5fa\") " pod="openshift-marketplace/certified-operators-r97mb"
Mar 08 03:19:35.255834 master-0 kubenswrapper[7387]: I0308 03:19:35.255780 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afe61b3-1460-48ed-9369-4d9893d2f4f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7afe61b3-1460-48ed-9369-4d9893d2f4f4" (UID: "7afe61b3-1460-48ed-9369-4d9893d2f4f4"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 03:19:35.280537 master-0 kubenswrapper[7387]: I0308 03:19:35.280474 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5qkq\" (UniqueName: \"kubernetes.io/projected/efd90b06-2733-4086-8d70-b9aed3f7c5fa-kube-api-access-w5qkq\") pod \"certified-operators-r97mb\" (UID: \"efd90b06-2733-4086-8d70-b9aed3f7c5fa\") " pod="openshift-marketplace/certified-operators-r97mb" Mar 08 03:19:35.308766 master-0 kubenswrapper[7387]: I0308 03:19:35.308686 7387 generic.go:334] "Generic (PLEG): container finished" podID="10895809-a444-42ec-a41f-111e17f6beb3" containerID="e758edc1c00f8607b75cd7d8c61fb0e8adc03a2c4a9c4602da27bdc698b41825" exitCode=0 Mar 08 03:19:35.309077 master-0 kubenswrapper[7387]: I0308 03:19:35.308779 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv2v9" event={"ID":"10895809-a444-42ec-a41f-111e17f6beb3","Type":"ContainerDied","Data":"e758edc1c00f8607b75cd7d8c61fb0e8adc03a2c4a9c4602da27bdc698b41825"} Mar 08 03:19:35.309077 master-0 kubenswrapper[7387]: I0308 03:19:35.308820 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv2v9" event={"ID":"10895809-a444-42ec-a41f-111e17f6beb3","Type":"ContainerDied","Data":"eb6a0fa697f07bd8b4258d861bc42d4dd0bded85d64bcf04e5a347df7ac607d8"} Mar 08 03:19:35.309077 master-0 kubenswrapper[7387]: I0308 03:19:35.308848 7387 scope.go:117] "RemoveContainer" containerID="e758edc1c00f8607b75cd7d8c61fb0e8adc03a2c4a9c4602da27bdc698b41825" Mar 08 03:19:35.309077 master-0 kubenswrapper[7387]: I0308 03:19:35.309058 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bv2v9" Mar 08 03:19:35.318363 master-0 kubenswrapper[7387]: I0308 03:19:35.318241 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r97mb" Mar 08 03:19:35.321825 master-0 kubenswrapper[7387]: I0308 03:19:35.321780 7387 generic.go:334] "Generic (PLEG): container finished" podID="7afe61b3-1460-48ed-9369-4d9893d2f4f4" containerID="0151a35bf1531a9a80bb3a7b0b88eb0c4b3a925a22fdf316ade1341157c849b4" exitCode=0 Mar 08 03:19:35.321825 master-0 kubenswrapper[7387]: I0308 03:19:35.321831 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2dj4" event={"ID":"7afe61b3-1460-48ed-9369-4d9893d2f4f4","Type":"ContainerDied","Data":"0151a35bf1531a9a80bb3a7b0b88eb0c4b3a925a22fdf316ade1341157c849b4"} Mar 08 03:19:35.322057 master-0 kubenswrapper[7387]: I0308 03:19:35.321862 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l2dj4" event={"ID":"7afe61b3-1460-48ed-9369-4d9893d2f4f4","Type":"ContainerDied","Data":"bf1527e18b5a86e91a809b4f5d095a7a82806a089dab98ff084c268db6ce9db6"} Mar 08 03:19:35.322057 master-0 kubenswrapper[7387]: I0308 03:19:35.321973 7387 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l2dj4" Mar 08 03:19:35.353378 master-0 kubenswrapper[7387]: I0308 03:19:35.353338 7387 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7afe61b3-1460-48ed-9369-4d9893d2f4f4-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 08 03:19:35.354312 master-0 kubenswrapper[7387]: I0308 03:19:35.354283 7387 scope.go:117] "RemoveContainer" containerID="13d72c72eed7a2191668b1f63c791fd4e7b201f6fbf35bfc7b80bd017cbf36a2" Mar 08 03:19:35.377719 master-0 kubenswrapper[7387]: I0308 03:19:35.377206 7387 scope.go:117] "RemoveContainer" containerID="559d8b447fe4fe44a5f74b23da4a499d9622c9c3bfb406e230a12898e8f8e265" Mar 08 03:19:35.390013 master-0 kubenswrapper[7387]: I0308 03:19:35.389960 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bv2v9"] Mar 08 03:19:35.395294 master-0 kubenswrapper[7387]: I0308 03:19:35.395255 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bv2v9"] Mar 08 03:19:35.404252 master-0 kubenswrapper[7387]: I0308 03:19:35.404135 7387 scope.go:117] "RemoveContainer" containerID="e758edc1c00f8607b75cd7d8c61fb0e8adc03a2c4a9c4602da27bdc698b41825" Mar 08 03:19:35.404833 master-0 kubenswrapper[7387]: E0308 03:19:35.404653 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e758edc1c00f8607b75cd7d8c61fb0e8adc03a2c4a9c4602da27bdc698b41825\": container with ID starting with e758edc1c00f8607b75cd7d8c61fb0e8adc03a2c4a9c4602da27bdc698b41825 not found: ID does not exist" containerID="e758edc1c00f8607b75cd7d8c61fb0e8adc03a2c4a9c4602da27bdc698b41825" Mar 08 03:19:35.404833 master-0 kubenswrapper[7387]: I0308 03:19:35.404695 7387 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e758edc1c00f8607b75cd7d8c61fb0e8adc03a2c4a9c4602da27bdc698b41825"} err="failed to get container status \"e758edc1c00f8607b75cd7d8c61fb0e8adc03a2c4a9c4602da27bdc698b41825\": rpc error: code = NotFound desc = could not find container \"e758edc1c00f8607b75cd7d8c61fb0e8adc03a2c4a9c4602da27bdc698b41825\": container with ID starting with e758edc1c00f8607b75cd7d8c61fb0e8adc03a2c4a9c4602da27bdc698b41825 not found: ID does not exist" Mar 08 03:19:35.404833 master-0 kubenswrapper[7387]: I0308 03:19:35.404725 7387 scope.go:117] "RemoveContainer" containerID="13d72c72eed7a2191668b1f63c791fd4e7b201f6fbf35bfc7b80bd017cbf36a2" Mar 08 03:19:35.405235 master-0 kubenswrapper[7387]: E0308 03:19:35.405154 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13d72c72eed7a2191668b1f63c791fd4e7b201f6fbf35bfc7b80bd017cbf36a2\": container with ID starting with 13d72c72eed7a2191668b1f63c791fd4e7b201f6fbf35bfc7b80bd017cbf36a2 not found: ID does not exist" containerID="13d72c72eed7a2191668b1f63c791fd4e7b201f6fbf35bfc7b80bd017cbf36a2" Mar 08 03:19:35.405235 master-0 kubenswrapper[7387]: I0308 03:19:35.405204 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13d72c72eed7a2191668b1f63c791fd4e7b201f6fbf35bfc7b80bd017cbf36a2"} err="failed to get container status \"13d72c72eed7a2191668b1f63c791fd4e7b201f6fbf35bfc7b80bd017cbf36a2\": rpc error: code = NotFound desc = could not find container \"13d72c72eed7a2191668b1f63c791fd4e7b201f6fbf35bfc7b80bd017cbf36a2\": container with ID starting with 13d72c72eed7a2191668b1f63c791fd4e7b201f6fbf35bfc7b80bd017cbf36a2 not found: ID does not exist" Mar 08 03:19:35.405413 master-0 kubenswrapper[7387]: I0308 03:19:35.405251 7387 scope.go:117] "RemoveContainer" containerID="559d8b447fe4fe44a5f74b23da4a499d9622c9c3bfb406e230a12898e8f8e265" Mar 08 03:19:35.405650 master-0 kubenswrapper[7387]: E0308 
03:19:35.405610 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"559d8b447fe4fe44a5f74b23da4a499d9622c9c3bfb406e230a12898e8f8e265\": container with ID starting with 559d8b447fe4fe44a5f74b23da4a499d9622c9c3bfb406e230a12898e8f8e265 not found: ID does not exist" containerID="559d8b447fe4fe44a5f74b23da4a499d9622c9c3bfb406e230a12898e8f8e265" Mar 08 03:19:35.405700 master-0 kubenswrapper[7387]: I0308 03:19:35.405643 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"559d8b447fe4fe44a5f74b23da4a499d9622c9c3bfb406e230a12898e8f8e265"} err="failed to get container status \"559d8b447fe4fe44a5f74b23da4a499d9622c9c3bfb406e230a12898e8f8e265\": rpc error: code = NotFound desc = could not find container \"559d8b447fe4fe44a5f74b23da4a499d9622c9c3bfb406e230a12898e8f8e265\": container with ID starting with 559d8b447fe4fe44a5f74b23da4a499d9622c9c3bfb406e230a12898e8f8e265 not found: ID does not exist" Mar 08 03:19:35.405700 master-0 kubenswrapper[7387]: I0308 03:19:35.405663 7387 scope.go:117] "RemoveContainer" containerID="0151a35bf1531a9a80bb3a7b0b88eb0c4b3a925a22fdf316ade1341157c849b4" Mar 08 03:19:35.421568 master-0 kubenswrapper[7387]: I0308 03:19:35.421474 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l2dj4"] Mar 08 03:19:35.425288 master-0 kubenswrapper[7387]: I0308 03:19:35.425209 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l2dj4"] Mar 08 03:19:35.438588 master-0 kubenswrapper[7387]: I0308 03:19:35.438539 7387 scope.go:117] "RemoveContainer" containerID="298d71a4dfadc213d01cb4cfba24e2539fd0a6e29419f1b99aebae7144764a03" Mar 08 03:19:35.462060 master-0 kubenswrapper[7387]: I0308 03:19:35.461947 7387 scope.go:117] "RemoveContainer" containerID="7b8261d3a814a19ee13244fd0a49a5dfe2c752dcb7e2705662730f4226213153" Mar 08 03:19:35.484862 master-0 
kubenswrapper[7387]: I0308 03:19:35.484791 7387 scope.go:117] "RemoveContainer" containerID="0151a35bf1531a9a80bb3a7b0b88eb0c4b3a925a22fdf316ade1341157c849b4" Mar 08 03:19:35.485827 master-0 kubenswrapper[7387]: E0308 03:19:35.485735 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0151a35bf1531a9a80bb3a7b0b88eb0c4b3a925a22fdf316ade1341157c849b4\": container with ID starting with 0151a35bf1531a9a80bb3a7b0b88eb0c4b3a925a22fdf316ade1341157c849b4 not found: ID does not exist" containerID="0151a35bf1531a9a80bb3a7b0b88eb0c4b3a925a22fdf316ade1341157c849b4" Mar 08 03:19:35.485956 master-0 kubenswrapper[7387]: I0308 03:19:35.485829 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0151a35bf1531a9a80bb3a7b0b88eb0c4b3a925a22fdf316ade1341157c849b4"} err="failed to get container status \"0151a35bf1531a9a80bb3a7b0b88eb0c4b3a925a22fdf316ade1341157c849b4\": rpc error: code = NotFound desc = could not find container \"0151a35bf1531a9a80bb3a7b0b88eb0c4b3a925a22fdf316ade1341157c849b4\": container with ID starting with 0151a35bf1531a9a80bb3a7b0b88eb0c4b3a925a22fdf316ade1341157c849b4 not found: ID does not exist" Mar 08 03:19:35.486011 master-0 kubenswrapper[7387]: I0308 03:19:35.485966 7387 scope.go:117] "RemoveContainer" containerID="298d71a4dfadc213d01cb4cfba24e2539fd0a6e29419f1b99aebae7144764a03" Mar 08 03:19:35.487086 master-0 kubenswrapper[7387]: E0308 03:19:35.487041 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"298d71a4dfadc213d01cb4cfba24e2539fd0a6e29419f1b99aebae7144764a03\": container with ID starting with 298d71a4dfadc213d01cb4cfba24e2539fd0a6e29419f1b99aebae7144764a03 not found: ID does not exist" containerID="298d71a4dfadc213d01cb4cfba24e2539fd0a6e29419f1b99aebae7144764a03" Mar 08 03:19:35.487146 master-0 kubenswrapper[7387]: I0308 03:19:35.487083 7387 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"298d71a4dfadc213d01cb4cfba24e2539fd0a6e29419f1b99aebae7144764a03"} err="failed to get container status \"298d71a4dfadc213d01cb4cfba24e2539fd0a6e29419f1b99aebae7144764a03\": rpc error: code = NotFound desc = could not find container \"298d71a4dfadc213d01cb4cfba24e2539fd0a6e29419f1b99aebae7144764a03\": container with ID starting with 298d71a4dfadc213d01cb4cfba24e2539fd0a6e29419f1b99aebae7144764a03 not found: ID does not exist" Mar 08 03:19:35.487146 master-0 kubenswrapper[7387]: I0308 03:19:35.487132 7387 scope.go:117] "RemoveContainer" containerID="7b8261d3a814a19ee13244fd0a49a5dfe2c752dcb7e2705662730f4226213153" Mar 08 03:19:35.487636 master-0 kubenswrapper[7387]: E0308 03:19:35.487573 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b8261d3a814a19ee13244fd0a49a5dfe2c752dcb7e2705662730f4226213153\": container with ID starting with 7b8261d3a814a19ee13244fd0a49a5dfe2c752dcb7e2705662730f4226213153 not found: ID does not exist" containerID="7b8261d3a814a19ee13244fd0a49a5dfe2c752dcb7e2705662730f4226213153" Mar 08 03:19:35.487699 master-0 kubenswrapper[7387]: I0308 03:19:35.487649 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b8261d3a814a19ee13244fd0a49a5dfe2c752dcb7e2705662730f4226213153"} err="failed to get container status \"7b8261d3a814a19ee13244fd0a49a5dfe2c752dcb7e2705662730f4226213153\": rpc error: code = NotFound desc = could not find container \"7b8261d3a814a19ee13244fd0a49a5dfe2c752dcb7e2705662730f4226213153\": container with ID starting with 7b8261d3a814a19ee13244fd0a49a5dfe2c752dcb7e2705662730f4226213153 not found: ID does not exist" Mar 08 03:19:35.636079 master-0 kubenswrapper[7387]: I0308 03:19:35.635653 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r97mb"] Mar 08 
03:19:35.648997 master-0 kubenswrapper[7387]: I0308 03:19:35.648947 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-82rfr"] Mar 08 03:19:35.653320 master-0 kubenswrapper[7387]: W0308 03:19:35.653285 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefd90b06_2733_4086_8d70_b9aed3f7c5fa.slice/crio-08be87d753f8ff54c42a674e20a358f8fd1197e96c11ac4af2d4563dac916924 WatchSource:0}: Error finding container 08be87d753f8ff54c42a674e20a358f8fd1197e96c11ac4af2d4563dac916924: Status 404 returned error can't find the container with id 08be87d753f8ff54c42a674e20a358f8fd1197e96c11ac4af2d4563dac916924 Mar 08 03:19:35.659208 master-0 kubenswrapper[7387]: W0308 03:19:35.659159 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea474cd1_8693_4505_9d6f_863d78776d11.slice/crio-7ae6734dc9a6a4883d043259eba3b292e17119fb0b35a539821b49660768f326 WatchSource:0}: Error finding container 7ae6734dc9a6a4883d043259eba3b292e17119fb0b35a539821b49660768f326: Status 404 returned error can't find the container with id 7ae6734dc9a6a4883d043259eba3b292e17119fb0b35a539821b49660768f326 Mar 08 03:19:35.771468 master-0 kubenswrapper[7387]: I0308 03:19:35.771398 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10895809-a444-42ec-a41f-111e17f6beb3" path="/var/lib/kubelet/pods/10895809-a444-42ec-a41f-111e17f6beb3/volumes" Mar 08 03:19:35.772737 master-0 kubenswrapper[7387]: I0308 03:19:35.772683 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afe61b3-1460-48ed-9369-4d9893d2f4f4" path="/var/lib/kubelet/pods/7afe61b3-1460-48ed-9369-4d9893d2f4f4/volumes" Mar 08 03:19:36.332292 master-0 kubenswrapper[7387]: I0308 03:19:36.332203 7387 generic.go:334] "Generic (PLEG): container finished" podID="efd90b06-2733-4086-8d70-b9aed3f7c5fa" 
containerID="4cff0cf9994171cd26e2dfc788853d1edc3f7d516e075c54ccc4de66155800df" exitCode=0 Mar 08 03:19:36.332584 master-0 kubenswrapper[7387]: I0308 03:19:36.332340 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r97mb" event={"ID":"efd90b06-2733-4086-8d70-b9aed3f7c5fa","Type":"ContainerDied","Data":"4cff0cf9994171cd26e2dfc788853d1edc3f7d516e075c54ccc4de66155800df"} Mar 08 03:19:36.332584 master-0 kubenswrapper[7387]: I0308 03:19:36.332445 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r97mb" event={"ID":"efd90b06-2733-4086-8d70-b9aed3f7c5fa","Type":"ContainerStarted","Data":"08be87d753f8ff54c42a674e20a358f8fd1197e96c11ac4af2d4563dac916924"} Mar 08 03:19:36.335876 master-0 kubenswrapper[7387]: I0308 03:19:36.335818 7387 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 03:19:36.349309 master-0 kubenswrapper[7387]: I0308 03:19:36.349252 7387 generic.go:334] "Generic (PLEG): container finished" podID="ea474cd1-8693-4505-9d6f-863d78776d11" containerID="24d2da5eecbea2601256f35d2117582419f13128e199a2ef407b84deab351231" exitCode=0 Mar 08 03:19:36.349309 master-0 kubenswrapper[7387]: I0308 03:19:36.349317 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82rfr" event={"ID":"ea474cd1-8693-4505-9d6f-863d78776d11","Type":"ContainerDied","Data":"24d2da5eecbea2601256f35d2117582419f13128e199a2ef407b84deab351231"} Mar 08 03:19:36.349611 master-0 kubenswrapper[7387]: I0308 03:19:36.349353 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82rfr" event={"ID":"ea474cd1-8693-4505-9d6f-863d78776d11","Type":"ContainerStarted","Data":"7ae6734dc9a6a4883d043259eba3b292e17119fb0b35a539821b49660768f326"} Mar 08 03:19:37.163625 master-0 kubenswrapper[7387]: I0308 03:19:37.163558 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-qwkmn"] Mar 08 03:19:37.164400 master-0 kubenswrapper[7387]: I0308 03:19:37.163959 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qwkmn" podUID="3a9142af-1b48-49b1-8e0f-53e8494d5e01" containerName="registry-server" containerID="cri-o://7d1e117d0ec451a4b1cba8ab16163f6c71cff1fb505fc4820a69f5c053ccc5d7" gracePeriod=2 Mar 08 03:19:37.361845 master-0 kubenswrapper[7387]: I0308 03:19:37.361256 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ljh97"] Mar 08 03:19:37.362605 master-0 kubenswrapper[7387]: I0308 03:19:37.362065 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82rfr" event={"ID":"ea474cd1-8693-4505-9d6f-863d78776d11","Type":"ContainerStarted","Data":"f6fa734f9f31ac07e6ddecdab50d459bed27799d7ebf08ef0257f97b10bcd874"} Mar 08 03:19:37.364860 master-0 kubenswrapper[7387]: I0308 03:19:37.364809 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r97mb" event={"ID":"efd90b06-2733-4086-8d70-b9aed3f7c5fa","Type":"ContainerStarted","Data":"1e4f4d94c09667f06d80074811ef12370da17593d72be45cabbce6af91fa585e"} Mar 08 03:19:37.368178 master-0 kubenswrapper[7387]: I0308 03:19:37.368123 7387 generic.go:334] "Generic (PLEG): container finished" podID="3a9142af-1b48-49b1-8e0f-53e8494d5e01" containerID="7d1e117d0ec451a4b1cba8ab16163f6c71cff1fb505fc4820a69f5c053ccc5d7" exitCode=0 Mar 08 03:19:37.368430 master-0 kubenswrapper[7387]: I0308 03:19:37.368207 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwkmn" event={"ID":"3a9142af-1b48-49b1-8e0f-53e8494d5e01","Type":"ContainerDied","Data":"7d1e117d0ec451a4b1cba8ab16163f6c71cff1fb505fc4820a69f5c053ccc5d7"} Mar 08 03:19:37.369092 master-0 kubenswrapper[7387]: I0308 03:19:37.368995 7387 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ljh97" podUID="4df5a48e-425c-443e-bfdf-6d57fe1e4638" containerName="registry-server" containerID="cri-o://353c345c655000efd9e6c93a30cd6a8ddd92de7374194a0c17aaea9333dfdb04" gracePeriod=2 Mar 08 03:19:37.570378 master-0 kubenswrapper[7387]: I0308 03:19:37.568641 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k6hg9"] Mar 08 03:19:37.570378 master-0 kubenswrapper[7387]: I0308 03:19:37.570060 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6hg9" Mar 08 03:19:37.582429 master-0 kubenswrapper[7387]: I0308 03:19:37.573380 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-d6gwq" Mar 08 03:19:37.601273 master-0 kubenswrapper[7387]: I0308 03:19:37.601170 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a3f04f-05ea-4ee3-ac77-da375c39d104-utilities\") pod \"redhat-marketplace-k6hg9\" (UID: \"32a3f04f-05ea-4ee3-ac77-da375c39d104\") " pod="openshift-marketplace/redhat-marketplace-k6hg9" Mar 08 03:19:37.601372 master-0 kubenswrapper[7387]: I0308 03:19:37.601304 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a3f04f-05ea-4ee3-ac77-da375c39d104-catalog-content\") pod \"redhat-marketplace-k6hg9\" (UID: \"32a3f04f-05ea-4ee3-ac77-da375c39d104\") " pod="openshift-marketplace/redhat-marketplace-k6hg9" Mar 08 03:19:37.601372 master-0 kubenswrapper[7387]: I0308 03:19:37.601329 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxjkw\" (UniqueName: 
\"kubernetes.io/projected/32a3f04f-05ea-4ee3-ac77-da375c39d104-kube-api-access-fxjkw\") pod \"redhat-marketplace-k6hg9\" (UID: \"32a3f04f-05ea-4ee3-ac77-da375c39d104\") " pod="openshift-marketplace/redhat-marketplace-k6hg9" Mar 08 03:19:37.626707 master-0 kubenswrapper[7387]: I0308 03:19:37.626660 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6hg9"] Mar 08 03:19:37.697456 master-0 kubenswrapper[7387]: I0308 03:19:37.696659 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwkmn" Mar 08 03:19:37.705993 master-0 kubenswrapper[7387]: I0308 03:19:37.702684 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a9142af-1b48-49b1-8e0f-53e8494d5e01-catalog-content\") pod \"3a9142af-1b48-49b1-8e0f-53e8494d5e01\" (UID: \"3a9142af-1b48-49b1-8e0f-53e8494d5e01\") " Mar 08 03:19:37.705993 master-0 kubenswrapper[7387]: I0308 03:19:37.702725 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a9142af-1b48-49b1-8e0f-53e8494d5e01-utilities\") pod \"3a9142af-1b48-49b1-8e0f-53e8494d5e01\" (UID: \"3a9142af-1b48-49b1-8e0f-53e8494d5e01\") " Mar 08 03:19:37.705993 master-0 kubenswrapper[7387]: I0308 03:19:37.702747 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnzvf\" (UniqueName: \"kubernetes.io/projected/3a9142af-1b48-49b1-8e0f-53e8494d5e01-kube-api-access-vnzvf\") pod \"3a9142af-1b48-49b1-8e0f-53e8494d5e01\" (UID: \"3a9142af-1b48-49b1-8e0f-53e8494d5e01\") " Mar 08 03:19:37.705993 master-0 kubenswrapper[7387]: I0308 03:19:37.702923 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a3f04f-05ea-4ee3-ac77-da375c39d104-utilities\") pod 
\"redhat-marketplace-k6hg9\" (UID: \"32a3f04f-05ea-4ee3-ac77-da375c39d104\") " pod="openshift-marketplace/redhat-marketplace-k6hg9" Mar 08 03:19:37.705993 master-0 kubenswrapper[7387]: I0308 03:19:37.703171 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a3f04f-05ea-4ee3-ac77-da375c39d104-catalog-content\") pod \"redhat-marketplace-k6hg9\" (UID: \"32a3f04f-05ea-4ee3-ac77-da375c39d104\") " pod="openshift-marketplace/redhat-marketplace-k6hg9" Mar 08 03:19:37.705993 master-0 kubenswrapper[7387]: I0308 03:19:37.703234 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxjkw\" (UniqueName: \"kubernetes.io/projected/32a3f04f-05ea-4ee3-ac77-da375c39d104-kube-api-access-fxjkw\") pod \"redhat-marketplace-k6hg9\" (UID: \"32a3f04f-05ea-4ee3-ac77-da375c39d104\") " pod="openshift-marketplace/redhat-marketplace-k6hg9" Mar 08 03:19:37.705993 master-0 kubenswrapper[7387]: I0308 03:19:37.703564 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a3f04f-05ea-4ee3-ac77-da375c39d104-catalog-content\") pod \"redhat-marketplace-k6hg9\" (UID: \"32a3f04f-05ea-4ee3-ac77-da375c39d104\") " pod="openshift-marketplace/redhat-marketplace-k6hg9" Mar 08 03:19:37.705993 master-0 kubenswrapper[7387]: I0308 03:19:37.703588 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a3f04f-05ea-4ee3-ac77-da375c39d104-utilities\") pod \"redhat-marketplace-k6hg9\" (UID: \"32a3f04f-05ea-4ee3-ac77-da375c39d104\") " pod="openshift-marketplace/redhat-marketplace-k6hg9" Mar 08 03:19:37.705993 master-0 kubenswrapper[7387]: I0308 03:19:37.703754 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a9142af-1b48-49b1-8e0f-53e8494d5e01-utilities" (OuterVolumeSpecName: 
"utilities") pod "3a9142af-1b48-49b1-8e0f-53e8494d5e01" (UID: "3a9142af-1b48-49b1-8e0f-53e8494d5e01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 03:19:37.711205 master-0 kubenswrapper[7387]: I0308 03:19:37.711157 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a9142af-1b48-49b1-8e0f-53e8494d5e01-kube-api-access-vnzvf" (OuterVolumeSpecName: "kube-api-access-vnzvf") pod "3a9142af-1b48-49b1-8e0f-53e8494d5e01" (UID: "3a9142af-1b48-49b1-8e0f-53e8494d5e01"). InnerVolumeSpecName "kube-api-access-vnzvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:19:37.747510 master-0 kubenswrapper[7387]: I0308 03:19:37.747386 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxjkw\" (UniqueName: \"kubernetes.io/projected/32a3f04f-05ea-4ee3-ac77-da375c39d104-kube-api-access-fxjkw\") pod \"redhat-marketplace-k6hg9\" (UID: \"32a3f04f-05ea-4ee3-ac77-da375c39d104\") " pod="openshift-marketplace/redhat-marketplace-k6hg9" Mar 08 03:19:37.749649 master-0 kubenswrapper[7387]: I0308 03:19:37.749585 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a9142af-1b48-49b1-8e0f-53e8494d5e01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a9142af-1b48-49b1-8e0f-53e8494d5e01" (UID: "3a9142af-1b48-49b1-8e0f-53e8494d5e01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 03:19:37.759768 master-0 kubenswrapper[7387]: I0308 03:19:37.759734 7387 scope.go:117] "RemoveContainer" containerID="5d5ab4a36feb6e5428f4fe82fd02d1bf53851b6363e11c4e53ba7fc20e220f93" Mar 08 03:19:37.790432 master-0 kubenswrapper[7387]: I0308 03:19:37.775386 7387 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ljh97"
Mar 08 03:19:37.790432 master-0 kubenswrapper[7387]: I0308 03:19:37.775388 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4h9n9"]
Mar 08 03:19:37.790432 master-0 kubenswrapper[7387]: E0308 03:19:37.775707 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4df5a48e-425c-443e-bfdf-6d57fe1e4638" containerName="extract-utilities"
Mar 08 03:19:37.790432 master-0 kubenswrapper[7387]: I0308 03:19:37.775719 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="4df5a48e-425c-443e-bfdf-6d57fe1e4638" containerName="extract-utilities"
Mar 08 03:19:37.790432 master-0 kubenswrapper[7387]: E0308 03:19:37.775735 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4df5a48e-425c-443e-bfdf-6d57fe1e4638" containerName="extract-content"
Mar 08 03:19:37.790432 master-0 kubenswrapper[7387]: I0308 03:19:37.775742 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="4df5a48e-425c-443e-bfdf-6d57fe1e4638" containerName="extract-content"
Mar 08 03:19:37.790432 master-0 kubenswrapper[7387]: E0308 03:19:37.775751 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a9142af-1b48-49b1-8e0f-53e8494d5e01" containerName="extract-content"
Mar 08 03:19:37.790432 master-0 kubenswrapper[7387]: I0308 03:19:37.775758 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a9142af-1b48-49b1-8e0f-53e8494d5e01" containerName="extract-content"
Mar 08 03:19:37.790432 master-0 kubenswrapper[7387]: E0308 03:19:37.775768 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a9142af-1b48-49b1-8e0f-53e8494d5e01" containerName="extract-utilities"
Mar 08 03:19:37.790432 master-0 kubenswrapper[7387]: I0308 03:19:37.775774 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a9142af-1b48-49b1-8e0f-53e8494d5e01" containerName="extract-utilities"
Mar 08 03:19:37.790432 master-0 kubenswrapper[7387]: E0308 03:19:37.775782 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a9142af-1b48-49b1-8e0f-53e8494d5e01" containerName="registry-server"
Mar 08 03:19:37.790432 master-0 kubenswrapper[7387]: I0308 03:19:37.775788 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a9142af-1b48-49b1-8e0f-53e8494d5e01" containerName="registry-server"
Mar 08 03:19:37.790432 master-0 kubenswrapper[7387]: E0308 03:19:37.775798 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4df5a48e-425c-443e-bfdf-6d57fe1e4638" containerName="registry-server"
Mar 08 03:19:37.790432 master-0 kubenswrapper[7387]: I0308 03:19:37.775804 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="4df5a48e-425c-443e-bfdf-6d57fe1e4638" containerName="registry-server"
Mar 08 03:19:37.790432 master-0 kubenswrapper[7387]: I0308 03:19:37.775877 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="4df5a48e-425c-443e-bfdf-6d57fe1e4638" containerName="registry-server"
Mar 08 03:19:37.790432 master-0 kubenswrapper[7387]: I0308 03:19:37.775895 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a9142af-1b48-49b1-8e0f-53e8494d5e01" containerName="registry-server"
Mar 08 03:19:37.790432 master-0 kubenswrapper[7387]: I0308 03:19:37.776460 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4h9n9"
Mar 08 03:19:37.790432 master-0 kubenswrapper[7387]: I0308 03:19:37.783950 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-9gswq"
Mar 08 03:19:37.795934 master-0 kubenswrapper[7387]: I0308 03:19:37.792952 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4h9n9"]
Mar 08 03:19:37.807260 master-0 kubenswrapper[7387]: I0308 03:19:37.804651 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cctj6\" (UniqueName: \"kubernetes.io/projected/4df5a48e-425c-443e-bfdf-6d57fe1e4638-kube-api-access-cctj6\") pod \"4df5a48e-425c-443e-bfdf-6d57fe1e4638\" (UID: \"4df5a48e-425c-443e-bfdf-6d57fe1e4638\") "
Mar 08 03:19:37.807260 master-0 kubenswrapper[7387]: I0308 03:19:37.804758 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4df5a48e-425c-443e-bfdf-6d57fe1e4638-utilities\") pod \"4df5a48e-425c-443e-bfdf-6d57fe1e4638\" (UID: \"4df5a48e-425c-443e-bfdf-6d57fe1e4638\") "
Mar 08 03:19:37.807260 master-0 kubenswrapper[7387]: I0308 03:19:37.805267 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4df5a48e-425c-443e-bfdf-6d57fe1e4638-catalog-content\") pod \"4df5a48e-425c-443e-bfdf-6d57fe1e4638\" (UID: \"4df5a48e-425c-443e-bfdf-6d57fe1e4638\") "
Mar 08 03:19:37.807260 master-0 kubenswrapper[7387]: I0308 03:19:37.805456 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82ee54a2-5967-4da7-940c-5200d7df098d-utilities\") pod \"redhat-operators-4h9n9\" (UID: \"82ee54a2-5967-4da7-940c-5200d7df098d\") " pod="openshift-marketplace/redhat-operators-4h9n9"
Mar 08 03:19:37.807260 master-0 kubenswrapper[7387]: I0308 03:19:37.805550 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82ee54a2-5967-4da7-940c-5200d7df098d-catalog-content\") pod \"redhat-operators-4h9n9\" (UID: \"82ee54a2-5967-4da7-940c-5200d7df098d\") " pod="openshift-marketplace/redhat-operators-4h9n9"
Mar 08 03:19:37.807260 master-0 kubenswrapper[7387]: I0308 03:19:37.805578 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttwx8\" (UniqueName: \"kubernetes.io/projected/82ee54a2-5967-4da7-940c-5200d7df098d-kube-api-access-ttwx8\") pod \"redhat-operators-4h9n9\" (UID: \"82ee54a2-5967-4da7-940c-5200d7df098d\") " pod="openshift-marketplace/redhat-operators-4h9n9"
Mar 08 03:19:37.807260 master-0 kubenswrapper[7387]: I0308 03:19:37.805607 7387 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a9142af-1b48-49b1-8e0f-53e8494d5e01-catalog-content\") on node \"master-0\" DevicePath \"\""
Mar 08 03:19:37.807260 master-0 kubenswrapper[7387]: I0308 03:19:37.805617 7387 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a9142af-1b48-49b1-8e0f-53e8494d5e01-utilities\") on node \"master-0\" DevicePath \"\""
Mar 08 03:19:37.807260 master-0 kubenswrapper[7387]: I0308 03:19:37.805626 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnzvf\" (UniqueName: \"kubernetes.io/projected/3a9142af-1b48-49b1-8e0f-53e8494d5e01-kube-api-access-vnzvf\") on node \"master-0\" DevicePath \"\""
Mar 08 03:19:37.807260 master-0 kubenswrapper[7387]: I0308 03:19:37.805655 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4df5a48e-425c-443e-bfdf-6d57fe1e4638-utilities" (OuterVolumeSpecName: "utilities") pod "4df5a48e-425c-443e-bfdf-6d57fe1e4638" (UID: "4df5a48e-425c-443e-bfdf-6d57fe1e4638"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 03:19:37.811605 master-0 kubenswrapper[7387]: I0308 03:19:37.808967 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4df5a48e-425c-443e-bfdf-6d57fe1e4638-kube-api-access-cctj6" (OuterVolumeSpecName: "kube-api-access-cctj6") pod "4df5a48e-425c-443e-bfdf-6d57fe1e4638" (UID: "4df5a48e-425c-443e-bfdf-6d57fe1e4638"). InnerVolumeSpecName "kube-api-access-cctj6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:19:37.906664 master-0 kubenswrapper[7387]: I0308 03:19:37.906620 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82ee54a2-5967-4da7-940c-5200d7df098d-catalog-content\") pod \"redhat-operators-4h9n9\" (UID: \"82ee54a2-5967-4da7-940c-5200d7df098d\") " pod="openshift-marketplace/redhat-operators-4h9n9"
Mar 08 03:19:37.906664 master-0 kubenswrapper[7387]: I0308 03:19:37.906665 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttwx8\" (UniqueName: \"kubernetes.io/projected/82ee54a2-5967-4da7-940c-5200d7df098d-kube-api-access-ttwx8\") pod \"redhat-operators-4h9n9\" (UID: \"82ee54a2-5967-4da7-940c-5200d7df098d\") " pod="openshift-marketplace/redhat-operators-4h9n9"
Mar 08 03:19:37.906891 master-0 kubenswrapper[7387]: I0308 03:19:37.906864 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82ee54a2-5967-4da7-940c-5200d7df098d-utilities\") pod \"redhat-operators-4h9n9\" (UID: \"82ee54a2-5967-4da7-940c-5200d7df098d\") " pod="openshift-marketplace/redhat-operators-4h9n9"
Mar 08 03:19:37.906974 master-0 kubenswrapper[7387]: I0308 03:19:37.906960 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cctj6\" (UniqueName: \"kubernetes.io/projected/4df5a48e-425c-443e-bfdf-6d57fe1e4638-kube-api-access-cctj6\") on node \"master-0\" DevicePath \"\""
Mar 08 03:19:37.907017 master-0 kubenswrapper[7387]: I0308 03:19:37.906976 7387 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4df5a48e-425c-443e-bfdf-6d57fe1e4638-utilities\") on node \"master-0\" DevicePath \"\""
Mar 08 03:19:37.907444 master-0 kubenswrapper[7387]: I0308 03:19:37.907410 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82ee54a2-5967-4da7-940c-5200d7df098d-utilities\") pod \"redhat-operators-4h9n9\" (UID: \"82ee54a2-5967-4da7-940c-5200d7df098d\") " pod="openshift-marketplace/redhat-operators-4h9n9"
Mar 08 03:19:37.907789 master-0 kubenswrapper[7387]: I0308 03:19:37.907561 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82ee54a2-5967-4da7-940c-5200d7df098d-catalog-content\") pod \"redhat-operators-4h9n9\" (UID: \"82ee54a2-5967-4da7-940c-5200d7df098d\") " pod="openshift-marketplace/redhat-operators-4h9n9"
Mar 08 03:19:37.939724 master-0 kubenswrapper[7387]: I0308 03:19:37.939647 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttwx8\" (UniqueName: \"kubernetes.io/projected/82ee54a2-5967-4da7-940c-5200d7df098d-kube-api-access-ttwx8\") pod \"redhat-operators-4h9n9\" (UID: \"82ee54a2-5967-4da7-940c-5200d7df098d\") " pod="openshift-marketplace/redhat-operators-4h9n9"
Mar 08 03:19:37.949309 master-0 kubenswrapper[7387]: I0308 03:19:37.949233 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4df5a48e-425c-443e-bfdf-6d57fe1e4638-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4df5a48e-425c-443e-bfdf-6d57fe1e4638" (UID: "4df5a48e-425c-443e-bfdf-6d57fe1e4638"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 03:19:38.007958 master-0 kubenswrapper[7387]: I0308 03:19:38.007874 7387 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4df5a48e-425c-443e-bfdf-6d57fe1e4638-catalog-content\") on node \"master-0\" DevicePath \"\""
Mar 08 03:19:38.043982 master-0 kubenswrapper[7387]: I0308 03:19:38.043893 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k6hg9"
Mar 08 03:19:38.112549 master-0 kubenswrapper[7387]: I0308 03:19:38.112468 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4h9n9"
Mar 08 03:19:38.310012 master-0 kubenswrapper[7387]: I0308 03:19:38.309941 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k6hg9"]
Mar 08 03:19:38.323993 master-0 kubenswrapper[7387]: W0308 03:19:38.320380 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32a3f04f_05ea_4ee3_ac77_da375c39d104.slice/crio-3b7b4beff94637a634e8ef9e4b25f19f962ecdd386d4f992ddeae713d81fd595 WatchSource:0}: Error finding container 3b7b4beff94637a634e8ef9e4b25f19f962ecdd386d4f992ddeae713d81fd595: Status 404 returned error can't find the container with id 3b7b4beff94637a634e8ef9e4b25f19f962ecdd386d4f992ddeae713d81fd595
Mar 08 03:19:38.376649 master-0 kubenswrapper[7387]: I0308 03:19:38.376607 7387 generic.go:334] "Generic (PLEG): container finished" podID="4df5a48e-425c-443e-bfdf-6d57fe1e4638" containerID="353c345c655000efd9e6c93a30cd6a8ddd92de7374194a0c17aaea9333dfdb04" exitCode=0
Mar 08 03:19:38.376999 master-0 kubenswrapper[7387]: I0308 03:19:38.376969 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljh97" event={"ID":"4df5a48e-425c-443e-bfdf-6d57fe1e4638","Type":"ContainerDied","Data":"353c345c655000efd9e6c93a30cd6a8ddd92de7374194a0c17aaea9333dfdb04"}
Mar 08 03:19:38.377144 master-0 kubenswrapper[7387]: I0308 03:19:38.377124 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljh97" event={"ID":"4df5a48e-425c-443e-bfdf-6d57fe1e4638","Type":"ContainerDied","Data":"d3f47f44b3c84618239ebe3bfe7bf4d1b33e913e345dd91f4e5f2389d83afc0e"}
Mar 08 03:19:38.377416 master-0 kubenswrapper[7387]: I0308 03:19:38.377392 7387 scope.go:117] "RemoveContainer" containerID="353c345c655000efd9e6c93a30cd6a8ddd92de7374194a0c17aaea9333dfdb04"
Mar 08 03:19:38.377666 master-0 kubenswrapper[7387]: I0308 03:19:38.377643 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ljh97"
Mar 08 03:19:38.385742 master-0 kubenswrapper[7387]: I0308 03:19:38.385677 7387 generic.go:334] "Generic (PLEG): container finished" podID="efd90b06-2733-4086-8d70-b9aed3f7c5fa" containerID="1e4f4d94c09667f06d80074811ef12370da17593d72be45cabbce6af91fa585e" exitCode=0
Mar 08 03:19:38.385894 master-0 kubenswrapper[7387]: I0308 03:19:38.385768 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r97mb" event={"ID":"efd90b06-2733-4086-8d70-b9aed3f7c5fa","Type":"ContainerDied","Data":"1e4f4d94c09667f06d80074811ef12370da17593d72be45cabbce6af91fa585e"}
Mar 08 03:19:38.391969 master-0 kubenswrapper[7387]: I0308 03:19:38.391930 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/3.log"
Mar 08 03:19:38.392109 master-0 kubenswrapper[7387]: I0308 03:19:38.392044 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" event={"ID":"9fb588a9-6240-4513-8e4b-248eb43d3f06","Type":"ContainerStarted","Data":"bf37b1d91d00a86f3f5cd7c16070ea9424642dc69ceced0dc38c369c92d986f3"}
Mar 08 03:19:38.394211 master-0 kubenswrapper[7387]: I0308 03:19:38.394154 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6hg9" event={"ID":"32a3f04f-05ea-4ee3-ac77-da375c39d104","Type":"ContainerStarted","Data":"3b7b4beff94637a634e8ef9e4b25f19f962ecdd386d4f992ddeae713d81fd595"}
Mar 08 03:19:38.397083 master-0 kubenswrapper[7387]: I0308 03:19:38.397035 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qwkmn" event={"ID":"3a9142af-1b48-49b1-8e0f-53e8494d5e01","Type":"ContainerDied","Data":"8caa1b5d7d43482e6821d9a8a466129706ff3cba15e380b7649182b138c2cbdd"}
Mar 08 03:19:38.397222 master-0 kubenswrapper[7387]: I0308 03:19:38.397176 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qwkmn"
Mar 08 03:19:38.402171 master-0 kubenswrapper[7387]: I0308 03:19:38.401257 7387 generic.go:334] "Generic (PLEG): container finished" podID="ea474cd1-8693-4505-9d6f-863d78776d11" containerID="f6fa734f9f31ac07e6ddecdab50d459bed27799d7ebf08ef0257f97b10bcd874" exitCode=0
Mar 08 03:19:38.402171 master-0 kubenswrapper[7387]: I0308 03:19:38.401315 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82rfr" event={"ID":"ea474cd1-8693-4505-9d6f-863d78776d11","Type":"ContainerDied","Data":"f6fa734f9f31ac07e6ddecdab50d459bed27799d7ebf08ef0257f97b10bcd874"}
Mar 08 03:19:38.415501 master-0 kubenswrapper[7387]: I0308 03:19:38.415457 7387 scope.go:117] "RemoveContainer" containerID="df3248903529862f585f7855a96e007f302ab5cb04ea2ed322080f24a1d8ae70"
Mar 08 03:19:38.471265 master-0 kubenswrapper[7387]: I0308 03:19:38.471236 7387 scope.go:117] "RemoveContainer" containerID="b86a5645dd90e6bc4e4dac5621a83f7adc3f8163dd7f706bfc47510c9199c2b7"
Mar 08 03:19:38.503389 master-0 kubenswrapper[7387]: I0308 03:19:38.502537 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwkmn"]
Mar 08 03:19:38.503389 master-0 kubenswrapper[7387]: I0308 03:19:38.502606 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qwkmn"]
Mar 08 03:19:38.545595 master-0 kubenswrapper[7387]: I0308 03:19:38.545550 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ljh97"]
Mar 08 03:19:38.547381 master-0 kubenswrapper[7387]: I0308 03:19:38.547310 7387 scope.go:117] "RemoveContainer" containerID="353c345c655000efd9e6c93a30cd6a8ddd92de7374194a0c17aaea9333dfdb04"
Mar 08 03:19:38.548087 master-0 kubenswrapper[7387]: E0308 03:19:38.548024 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"353c345c655000efd9e6c93a30cd6a8ddd92de7374194a0c17aaea9333dfdb04\": container with ID starting with 353c345c655000efd9e6c93a30cd6a8ddd92de7374194a0c17aaea9333dfdb04 not found: ID does not exist" containerID="353c345c655000efd9e6c93a30cd6a8ddd92de7374194a0c17aaea9333dfdb04"
Mar 08 03:19:38.548158 master-0 kubenswrapper[7387]: I0308 03:19:38.548101 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"353c345c655000efd9e6c93a30cd6a8ddd92de7374194a0c17aaea9333dfdb04"} err="failed to get container status \"353c345c655000efd9e6c93a30cd6a8ddd92de7374194a0c17aaea9333dfdb04\": rpc error: code = NotFound desc = could not find container \"353c345c655000efd9e6c93a30cd6a8ddd92de7374194a0c17aaea9333dfdb04\": container with ID starting with 353c345c655000efd9e6c93a30cd6a8ddd92de7374194a0c17aaea9333dfdb04 not found: ID does not exist"
Mar 08 03:19:38.548158 master-0 kubenswrapper[7387]: I0308 03:19:38.548131 7387 scope.go:117] "RemoveContainer" containerID="df3248903529862f585f7855a96e007f302ab5cb04ea2ed322080f24a1d8ae70"
Mar 08 03:19:38.548837 master-0 kubenswrapper[7387]: E0308 03:19:38.548713 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df3248903529862f585f7855a96e007f302ab5cb04ea2ed322080f24a1d8ae70\": container with ID starting with df3248903529862f585f7855a96e007f302ab5cb04ea2ed322080f24a1d8ae70 not found: ID does not exist" containerID="df3248903529862f585f7855a96e007f302ab5cb04ea2ed322080f24a1d8ae70"
Mar 08 03:19:38.548837 master-0 kubenswrapper[7387]: I0308 03:19:38.548800 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df3248903529862f585f7855a96e007f302ab5cb04ea2ed322080f24a1d8ae70"} err="failed to get container status \"df3248903529862f585f7855a96e007f302ab5cb04ea2ed322080f24a1d8ae70\": rpc error: code = NotFound desc = could not find container \"df3248903529862f585f7855a96e007f302ab5cb04ea2ed322080f24a1d8ae70\": container with ID starting with df3248903529862f585f7855a96e007f302ab5cb04ea2ed322080f24a1d8ae70 not found: ID does not exist"
Mar 08 03:19:38.548837 master-0 kubenswrapper[7387]: I0308 03:19:38.548847 7387 scope.go:117] "RemoveContainer" containerID="b86a5645dd90e6bc4e4dac5621a83f7adc3f8163dd7f706bfc47510c9199c2b7"
Mar 08 03:19:38.549276 master-0 kubenswrapper[7387]: E0308 03:19:38.549181 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b86a5645dd90e6bc4e4dac5621a83f7adc3f8163dd7f706bfc47510c9199c2b7\": container with ID starting with b86a5645dd90e6bc4e4dac5621a83f7adc3f8163dd7f706bfc47510c9199c2b7 not found: ID does not exist" containerID="b86a5645dd90e6bc4e4dac5621a83f7adc3f8163dd7f706bfc47510c9199c2b7"
Mar 08 03:19:38.549276 master-0 kubenswrapper[7387]: I0308 03:19:38.549210 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b86a5645dd90e6bc4e4dac5621a83f7adc3f8163dd7f706bfc47510c9199c2b7"} err="failed to get container status \"b86a5645dd90e6bc4e4dac5621a83f7adc3f8163dd7f706bfc47510c9199c2b7\": rpc error: code = NotFound desc = could not find container \"b86a5645dd90e6bc4e4dac5621a83f7adc3f8163dd7f706bfc47510c9199c2b7\": container with ID starting with b86a5645dd90e6bc4e4dac5621a83f7adc3f8163dd7f706bfc47510c9199c2b7 not found: ID does not exist"
Mar 08 03:19:38.549276 master-0 kubenswrapper[7387]: I0308 03:19:38.549248 7387 scope.go:117] "RemoveContainer" containerID="7d1e117d0ec451a4b1cba8ab16163f6c71cff1fb505fc4820a69f5c053ccc5d7"
Mar 08 03:19:38.550408 master-0 kubenswrapper[7387]: I0308 03:19:38.550312 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ljh97"]
Mar 08 03:19:38.564196 master-0 kubenswrapper[7387]: I0308 03:19:38.564149 7387 scope.go:117] "RemoveContainer" containerID="4d94dea428b3bf85791a0b8f028285c48bd5213ac70429f60380a516057a75ed"
Mar 08 03:19:38.586800 master-0 kubenswrapper[7387]: I0308 03:19:38.586454 7387 scope.go:117] "RemoveContainer" containerID="4864123e35280779c7eb88b414c99a6dc86b1ee4312ab819168cc4c3fb25d713"
Mar 08 03:19:38.608721 master-0 kubenswrapper[7387]: I0308 03:19:38.608666 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4h9n9"]
Mar 08 03:19:38.670105 master-0 kubenswrapper[7387]: W0308 03:19:38.669987 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82ee54a2_5967_4da7_940c_5200d7df098d.slice/crio-7b27a4cf8670701cc2abed7a5d7cf91c3ac386bb22a1ffb161f3900b04157d20 WatchSource:0}: Error finding container 7b27a4cf8670701cc2abed7a5d7cf91c3ac386bb22a1ffb161f3900b04157d20: Status 404 returned error can't find the container with id 7b27a4cf8670701cc2abed7a5d7cf91c3ac386bb22a1ffb161f3900b04157d20
Mar 08 03:19:39.415929 master-0 kubenswrapper[7387]: I0308 03:19:39.412140 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r97mb" event={"ID":"efd90b06-2733-4086-8d70-b9aed3f7c5fa","Type":"ContainerStarted","Data":"aa35305cd234500a8f54fd00ebc33dcaeecb693a03f91f0d0145774486195e4b"}
Mar 08 03:19:39.417532 master-0 kubenswrapper[7387]: I0308 03:19:39.416595 7387 generic.go:334] "Generic (PLEG): container finished" podID="32a3f04f-05ea-4ee3-ac77-da375c39d104" containerID="2a16a4af1391388c9f3a8456384c6ebc73646aae055d7d3ffb5f00616c4c0d45" exitCode=0
Mar 08 03:19:39.417532 master-0 kubenswrapper[7387]: I0308 03:19:39.416670 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6hg9" event={"ID":"32a3f04f-05ea-4ee3-ac77-da375c39d104","Type":"ContainerDied","Data":"2a16a4af1391388c9f3a8456384c6ebc73646aae055d7d3ffb5f00616c4c0d45"}
Mar 08 03:19:39.424649 master-0 kubenswrapper[7387]: I0308 03:19:39.424585 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-82rfr" event={"ID":"ea474cd1-8693-4505-9d6f-863d78776d11","Type":"ContainerStarted","Data":"2ca4a8b5dbaf646cf201c46eeb55dabfa8bacfbc558b6b96d35eb2bbb34bbd2d"}
Mar 08 03:19:39.444863 master-0 kubenswrapper[7387]: I0308 03:19:39.444809 7387 generic.go:334] "Generic (PLEG): container finished" podID="82ee54a2-5967-4da7-940c-5200d7df098d" containerID="9c94e7958c020b301758cb42ae87ec2c374c361307485925c4fcc17c93742009" exitCode=0
Mar 08 03:19:39.445038 master-0 kubenswrapper[7387]: I0308 03:19:39.444864 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4h9n9" event={"ID":"82ee54a2-5967-4da7-940c-5200d7df098d","Type":"ContainerDied","Data":"9c94e7958c020b301758cb42ae87ec2c374c361307485925c4fcc17c93742009"}
Mar 08 03:19:39.445038 master-0 kubenswrapper[7387]: I0308 03:19:39.444893 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4h9n9" event={"ID":"82ee54a2-5967-4da7-940c-5200d7df098d","Type":"ContainerStarted","Data":"7b27a4cf8670701cc2abed7a5d7cf91c3ac386bb22a1ffb161f3900b04157d20"}
Mar 08 03:19:39.457567 master-0 kubenswrapper[7387]: I0308 03:19:39.457470 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r97mb" podStartSLOduration=2.971827194 podStartE2EDuration="5.457445839s" podCreationTimestamp="2026-03-08 03:19:34 +0000 UTC" firstStartedPulling="2026-03-08 03:19:36.335723561 +0000 UTC m=+512.730199272" lastFinishedPulling="2026-03-08 03:19:38.821342226 +0000 UTC m=+515.215817917" observedRunningTime="2026-03-08 03:19:39.45674819 +0000 UTC m=+515.851223891" watchObservedRunningTime="2026-03-08 03:19:39.457445839 +0000 UTC m=+515.851921560"
Mar 08 03:19:39.547501 master-0 kubenswrapper[7387]: I0308 03:19:39.547429 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-82rfr" podStartSLOduration=2.992871781 podStartE2EDuration="5.547410378s" podCreationTimestamp="2026-03-08 03:19:34 +0000 UTC" firstStartedPulling="2026-03-08 03:19:36.351660003 +0000 UTC m=+512.746135714" lastFinishedPulling="2026-03-08 03:19:38.90619862 +0000 UTC m=+515.300674311" observedRunningTime="2026-03-08 03:19:39.543418763 +0000 UTC m=+515.937894444" watchObservedRunningTime="2026-03-08 03:19:39.547410378 +0000 UTC m=+515.941886059"
Mar 08 03:19:39.766496 master-0 kubenswrapper[7387]: I0308 03:19:39.766451 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a9142af-1b48-49b1-8e0f-53e8494d5e01" path="/var/lib/kubelet/pods/3a9142af-1b48-49b1-8e0f-53e8494d5e01/volumes"
Mar 08 03:19:39.767093 master-0 kubenswrapper[7387]: I0308 03:19:39.767069 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4df5a48e-425c-443e-bfdf-6d57fe1e4638" path="/var/lib/kubelet/pods/4df5a48e-425c-443e-bfdf-6d57fe1e4638/volumes"
Mar 08 03:19:40.452205 master-0 kubenswrapper[7387]: I0308 03:19:40.452141 7387 generic.go:334] "Generic (PLEG): container finished" podID="32a3f04f-05ea-4ee3-ac77-da375c39d104" containerID="d95366bbb45d1486da1389f6482624ab19b4c42be8cafcec08506d4ffd00d1c1" exitCode=0
Mar 08 03:19:40.452656 master-0 kubenswrapper[7387]: I0308 03:19:40.452249 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6hg9" event={"ID":"32a3f04f-05ea-4ee3-ac77-da375c39d104","Type":"ContainerDied","Data":"d95366bbb45d1486da1389f6482624ab19b4c42be8cafcec08506d4ffd00d1c1"}
Mar 08 03:19:40.454529 master-0 kubenswrapper[7387]: I0308 03:19:40.454476 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4h9n9" event={"ID":"82ee54a2-5967-4da7-940c-5200d7df098d","Type":"ContainerStarted","Data":"56b45cbe22a9ea31f9701b6616f25027fe9ee05239d29ec96e9726f45861602c"}
Mar 08 03:19:41.463588 master-0 kubenswrapper[7387]: I0308 03:19:41.463520 7387 generic.go:334] "Generic (PLEG): container finished" podID="82ee54a2-5967-4da7-940c-5200d7df098d" containerID="56b45cbe22a9ea31f9701b6616f25027fe9ee05239d29ec96e9726f45861602c" exitCode=0
Mar 08 03:19:41.464179 master-0 kubenswrapper[7387]: I0308 03:19:41.463651 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4h9n9" event={"ID":"82ee54a2-5967-4da7-940c-5200d7df098d","Type":"ContainerDied","Data":"56b45cbe22a9ea31f9701b6616f25027fe9ee05239d29ec96e9726f45861602c"}
Mar 08 03:19:41.466746 master-0 kubenswrapper[7387]: I0308 03:19:41.466709 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k6hg9" event={"ID":"32a3f04f-05ea-4ee3-ac77-da375c39d104","Type":"ContainerStarted","Data":"fb66365f9246550e640b1e45298369de67ea8dac915bfd3bb741b1c575558376"}
Mar 08 03:19:41.517682 master-0 kubenswrapper[7387]: I0308 03:19:41.517561 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k6hg9" podStartSLOduration=2.979736572 podStartE2EDuration="4.51754017s" podCreationTimestamp="2026-03-08 03:19:37 +0000 UTC" firstStartedPulling="2026-03-08 03:19:39.4185368 +0000 UTC m=+515.813012511" lastFinishedPulling="2026-03-08 03:19:40.956340418 +0000 UTC m=+517.350816109" observedRunningTime="2026-03-08 03:19:41.517076708 +0000 UTC m=+517.911552439" watchObservedRunningTime="2026-03-08 03:19:41.51754017 +0000 UTC m=+517.912015861"
Mar 08 03:19:42.477706 master-0 kubenswrapper[7387]: I0308 03:19:42.477633 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4h9n9" event={"ID":"82ee54a2-5967-4da7-940c-5200d7df098d","Type":"ContainerStarted","Data":"660c3ea4dd1c4be6b76ff6dd73e8b47c87a55a766b71e69489df65a18da4b9a8"}
Mar 08 03:19:45.137896 master-0 kubenswrapper[7387]: I0308 03:19:45.137823 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-82rfr"
Mar 08 03:19:45.138666 master-0 kubenswrapper[7387]: I0308 03:19:45.137939 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-82rfr"
Mar 08 03:19:45.192335 master-0 kubenswrapper[7387]: I0308 03:19:45.192277 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-82rfr"
Mar 08 03:19:45.225592 master-0 kubenswrapper[7387]: I0308 03:19:45.225434 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4h9n9" podStartSLOduration=5.774987005 podStartE2EDuration="8.225404659s" podCreationTimestamp="2026-03-08 03:19:37 +0000 UTC" firstStartedPulling="2026-03-08 03:19:39.446263933 +0000 UTC m=+515.840739604" lastFinishedPulling="2026-03-08 03:19:41.896681537 +0000 UTC m=+518.291157258" observedRunningTime="2026-03-08 03:19:42.502368465 +0000 UTC m=+518.896844156" watchObservedRunningTime="2026-03-08 03:19:45.225404659 +0000 UTC m=+521.619880380"
Mar 08 03:19:45.318631 master-0 kubenswrapper[7387]: I0308 03:19:45.318544 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r97mb"
Mar 08 03:19:45.318877 master-0 kubenswrapper[7387]: I0308 03:19:45.318647 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r97mb"
Mar 08 03:19:45.382443 master-0 kubenswrapper[7387]: I0308 03:19:45.382390 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r97mb"
Mar 08 03:19:45.570947 master-0 kubenswrapper[7387]: I0308 03:19:45.570878 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-82rfr"
Mar 08 03:19:45.574165 master-0 kubenswrapper[7387]: I0308 03:19:45.574117 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r97mb"
Mar 08 03:19:48.044236 master-0 kubenswrapper[7387]: I0308 03:19:48.044148 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k6hg9"
Mar 08 03:19:48.044236 master-0 kubenswrapper[7387]: I0308 03:19:48.044226 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k6hg9"
Mar 08 03:19:48.107024 master-0 kubenswrapper[7387]: I0308 03:19:48.106960 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k6hg9"
Mar 08 03:19:48.113105 master-0 kubenswrapper[7387]: I0308 03:19:48.113061 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4h9n9"
Mar 08 03:19:48.113438 master-0 kubenswrapper[7387]: I0308 03:19:48.113357 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4h9n9"
Mar 08 03:19:48.595399 master-0 kubenswrapper[7387]: I0308 03:19:48.595348 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k6hg9"
Mar 08 03:19:49.176384 master-0 kubenswrapper[7387]: I0308 03:19:49.176313 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4h9n9" podUID="82ee54a2-5967-4da7-940c-5200d7df098d" containerName="registry-server" probeResult="failure" output=<
Mar 08 03:19:49.176384 master-0 kubenswrapper[7387]: timeout: failed to connect service ":50051" within 1s
Mar 08 03:19:49.176384 master-0 kubenswrapper[7387]: >
Mar 08 03:19:58.174517 master-0 kubenswrapper[7387]: I0308 03:19:58.174436 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4h9n9"
Mar 08 03:19:58.243674 master-0 kubenswrapper[7387]: I0308 03:19:58.243614 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4h9n9"
Mar 08 03:21:14.106374 master-0 kubenswrapper[7387]: I0308 03:21:14.106295 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4bpl8_197afe92-5912-4e90-a477-e3abe001bbc7/ingress-operator/1.log"
Mar 08 03:21:14.108064 master-0 kubenswrapper[7387]: I0308 03:21:14.107997 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4bpl8_197afe92-5912-4e90-a477-e3abe001bbc7/ingress-operator/0.log"
Mar 08 03:21:14.108232 master-0 kubenswrapper[7387]: I0308 03:21:14.108095 7387 generic.go:334] "Generic (PLEG): container finished" podID="197afe92-5912-4e90-a477-e3abe001bbc7" containerID="84c99d58596591f517162ce0801066c3386afbe465547d2042ee596ce9855fda" exitCode=1
Mar 08 03:21:14.108232 master-0 kubenswrapper[7387]: I0308 03:21:14.108154 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" event={"ID":"197afe92-5912-4e90-a477-e3abe001bbc7","Type":"ContainerDied","Data":"84c99d58596591f517162ce0801066c3386afbe465547d2042ee596ce9855fda"}
Mar 08 03:21:14.108232 master-0 kubenswrapper[7387]: I0308 03:21:14.108211 7387 scope.go:117] "RemoveContainer" containerID="11de5739554b7c94cfe0fa61f3b1195f2e9f62f484bc837ca53fa9727626c6dd"
Mar 08 03:21:14.109099 master-0 kubenswrapper[7387]: I0308 03:21:14.109029 7387 scope.go:117] "RemoveContainer" containerID="84c99d58596591f517162ce0801066c3386afbe465547d2042ee596ce9855fda"
Mar 08 03:21:14.109646 master-0 kubenswrapper[7387]: E0308 03:21:14.109536 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-4bpl8_openshift-ingress-operator(197afe92-5912-4e90-a477-e3abe001bbc7)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" podUID="197afe92-5912-4e90-a477-e3abe001bbc7"
Mar 08 03:21:15.118954 master-0 kubenswrapper[7387]: I0308 03:21:15.118832 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4bpl8_197afe92-5912-4e90-a477-e3abe001bbc7/ingress-operator/1.log"
Mar 08 03:21:25.760764 master-0 kubenswrapper[7387]: I0308 03:21:25.760680 7387 scope.go:117] "RemoveContainer" containerID="84c99d58596591f517162ce0801066c3386afbe465547d2042ee596ce9855fda"
Mar 08 03:21:26.192768 master-0 kubenswrapper[7387]: I0308 03:21:26.192595 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4bpl8_197afe92-5912-4e90-a477-e3abe001bbc7/ingress-operator/1.log"
Mar 08 03:21:26.193404 master-0 kubenswrapper[7387]: I0308 03:21:26.193330 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" event={"ID":"197afe92-5912-4e90-a477-e3abe001bbc7","Type":"ContainerStarted","Data":"1d5309bb49bc359c6f650d35b0215dfd107ee09ec728eed9abd6a570ec1d8886"}
Mar 08 03:23:27.028732 master-0 kubenswrapper[7387]: I0308 03:23:27.028591 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4bpl8_197afe92-5912-4e90-a477-e3abe001bbc7/ingress-operator/2.log"
Mar 08 03:23:27.031570 master-0 kubenswrapper[7387]: I0308 03:23:27.031518 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4bpl8_197afe92-5912-4e90-a477-e3abe001bbc7/ingress-operator/1.log"
Mar 08 03:23:27.032687 master-0 kubenswrapper[7387]: I0308 03:23:27.032560 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" event={"ID":"197afe92-5912-4e90-a477-e3abe001bbc7","Type":"ContainerDied","Data":"1d5309bb49bc359c6f650d35b0215dfd107ee09ec728eed9abd6a570ec1d8886"}
Mar 08 03:23:27.032887 master-0 kubenswrapper[7387]: I0308 03:23:27.032415 7387 generic.go:334] "Generic (PLEG): container finished" podID="197afe92-5912-4e90-a477-e3abe001bbc7" containerID="1d5309bb49bc359c6f650d35b0215dfd107ee09ec728eed9abd6a570ec1d8886" exitCode=1
Mar 08 03:23:27.032887 master-0 kubenswrapper[7387]: I0308 03:23:27.032803 7387 scope.go:117] "RemoveContainer" containerID="84c99d58596591f517162ce0801066c3386afbe465547d2042ee596ce9855fda"
Mar 08 03:23:27.035338 master-0 kubenswrapper[7387]: I0308 03:23:27.035147 7387 scope.go:117] "RemoveContainer" containerID="1d5309bb49bc359c6f650d35b0215dfd107ee09ec728eed9abd6a570ec1d8886"
Mar 08 03:23:27.035741 master-0 kubenswrapper[7387]: E0308 03:23:27.035686 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-4bpl8_openshift-ingress-operator(197afe92-5912-4e90-a477-e3abe001bbc7)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" podUID="197afe92-5912-4e90-a477-e3abe001bbc7"
Mar 08 03:23:28.044179 master-0 kubenswrapper[7387]: I0308 03:23:28.044101 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4bpl8_197afe92-5912-4e90-a477-e3abe001bbc7/ingress-operator/2.log"
Mar 08 03:23:40.759561 master-0 kubenswrapper[7387]: I0308 03:23:40.759484 7387 scope.go:117] "RemoveContainer" containerID="1d5309bb49bc359c6f650d35b0215dfd107ee09ec728eed9abd6a570ec1d8886"
Mar 08 03:23:40.760512 master-0 kubenswrapper[7387]: E0308 03:23:40.759735 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-4bpl8_openshift-ingress-operator(197afe92-5912-4e90-a477-e3abe001bbc7)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" podUID="197afe92-5912-4e90-a477-e3abe001bbc7"
Mar 08 03:23:47.464289 master-0 kubenswrapper[7387]: I0308 03:23:47.464206 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww"]
Mar 08 03:23:47.466405 master-0 kubenswrapper[7387]: I0308 03:23:47.465502 7387 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww" Mar 08 03:23:47.467834 master-0 kubenswrapper[7387]: I0308 03:23:47.467790 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-7hbhc" Mar 08 03:23:47.469031 master-0 kubenswrapper[7387]: I0308 03:23:47.468973 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 08 03:23:47.469444 master-0 kubenswrapper[7387]: I0308 03:23:47.469405 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 08 03:23:47.469444 master-0 kubenswrapper[7387]: I0308 03:23:47.469434 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 08 03:23:47.527077 master-0 kubenswrapper[7387]: I0308 03:23:47.486994 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww"] Mar 08 03:23:47.572050 master-0 kubenswrapper[7387]: I0308 03:23:47.571665 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mbg2\" (UniqueName: \"kubernetes.io/projected/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-kube-api-access-2mbg2\") pod \"control-plane-machine-set-operator-6686554ddc-zljww\" (UID: \"c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww" Mar 08 03:23:47.572050 master-0 kubenswrapper[7387]: I0308 03:23:47.571932 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-6686554ddc-zljww\" (UID: \"c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww" Mar 08 03:23:47.673859 master-0 kubenswrapper[7387]: I0308 03:23:47.673790 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mbg2\" (UniqueName: \"kubernetes.io/projected/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-kube-api-access-2mbg2\") pod \"control-plane-machine-set-operator-6686554ddc-zljww\" (UID: \"c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww" Mar 08 03:23:47.674165 master-0 kubenswrapper[7387]: I0308 03:23:47.673959 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-zljww\" (UID: \"c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww" Mar 08 03:23:47.674165 master-0 kubenswrapper[7387]: E0308 03:23:47.674146 7387 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Mar 08 03:23:47.674308 master-0 kubenswrapper[7387]: E0308 03:23:47.674223 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-control-plane-machine-set-operator-tls podName:c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6 nodeName:}" failed. No retries permitted until 2026-03-08 03:23:48.174199625 +0000 UTC m=+764.568675346 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6686554ddc-zljww" (UID: "c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6") : secret "control-plane-machine-set-operator-tls" not found Mar 08 03:23:47.702100 master-0 kubenswrapper[7387]: I0308 03:23:47.702009 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mbg2\" (UniqueName: \"kubernetes.io/projected/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-kube-api-access-2mbg2\") pod \"control-plane-machine-set-operator-6686554ddc-zljww\" (UID: \"c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww" Mar 08 03:23:48.182209 master-0 kubenswrapper[7387]: I0308 03:23:48.182125 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-zljww\" (UID: \"c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww" Mar 08 03:23:48.182631 master-0 kubenswrapper[7387]: E0308 03:23:48.182385 7387 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Mar 08 03:23:48.182631 master-0 kubenswrapper[7387]: E0308 03:23:48.182501 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-control-plane-machine-set-operator-tls podName:c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6 nodeName:}" failed. No retries permitted until 2026-03-08 03:23:49.182471843 +0000 UTC m=+765.576947554 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6686554ddc-zljww" (UID: "c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6") : secret "control-plane-machine-set-operator-tls" not found Mar 08 03:23:49.196990 master-0 kubenswrapper[7387]: I0308 03:23:49.196854 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-zljww\" (UID: \"c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww" Mar 08 03:23:49.197753 master-0 kubenswrapper[7387]: E0308 03:23:49.197107 7387 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Mar 08 03:23:49.197753 master-0 kubenswrapper[7387]: E0308 03:23:49.197203 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-control-plane-machine-set-operator-tls podName:c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6 nodeName:}" failed. No retries permitted until 2026-03-08 03:23:51.197176039 +0000 UTC m=+767.591651760 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6686554ddc-zljww" (UID: "c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6") : secret "control-plane-machine-set-operator-tls" not found Mar 08 03:23:51.104879 master-0 kubenswrapper[7387]: I0308 03:23:51.104806 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx"] Mar 08 03:23:51.105965 master-0 kubenswrapper[7387]: I0308 03:23:51.105749 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx" Mar 08 03:23:51.109140 master-0 kubenswrapper[7387]: I0308 03:23:51.109081 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 08 03:23:51.109566 master-0 kubenswrapper[7387]: I0308 03:23:51.109502 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 08 03:23:51.109703 master-0 kubenswrapper[7387]: I0308 03:23:51.109675 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 08 03:23:51.109812 master-0 kubenswrapper[7387]: I0308 03:23:51.109732 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 08 03:23:51.110199 master-0 kubenswrapper[7387]: I0308 03:23:51.110117 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-vbs7r" Mar 08 03:23:51.110355 master-0 kubenswrapper[7387]: I0308 03:23:51.110233 7387 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 08 03:23:51.124622 master-0 kubenswrapper[7387]: I0308 03:23:51.124559 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31fa65e4-4348-426c-8f41-150c99ee4d6a-config\") pod \"machine-approver-955fcfb87-6hrqx\" (UID: \"31fa65e4-4348-426c-8f41-150c99ee4d6a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx" Mar 08 03:23:51.124756 master-0 kubenswrapper[7387]: I0308 03:23:51.124644 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31fa65e4-4348-426c-8f41-150c99ee4d6a-auth-proxy-config\") pod \"machine-approver-955fcfb87-6hrqx\" (UID: \"31fa65e4-4348-426c-8f41-150c99ee4d6a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx" Mar 08 03:23:51.124814 master-0 kubenswrapper[7387]: I0308 03:23:51.124749 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prxhl\" (UniqueName: \"kubernetes.io/projected/31fa65e4-4348-426c-8f41-150c99ee4d6a-kube-api-access-prxhl\") pod \"machine-approver-955fcfb87-6hrqx\" (UID: \"31fa65e4-4348-426c-8f41-150c99ee4d6a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx" Mar 08 03:23:51.125213 master-0 kubenswrapper[7387]: I0308 03:23:51.125148 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/31fa65e4-4348-426c-8f41-150c99ee4d6a-machine-approver-tls\") pod \"machine-approver-955fcfb87-6hrqx\" (UID: \"31fa65e4-4348-426c-8f41-150c99ee4d6a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx" Mar 08 03:23:51.226654 master-0 kubenswrapper[7387]: I0308 03:23:51.226548 7387 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/31fa65e4-4348-426c-8f41-150c99ee4d6a-machine-approver-tls\") pod \"machine-approver-955fcfb87-6hrqx\" (UID: \"31fa65e4-4348-426c-8f41-150c99ee4d6a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx" Mar 08 03:23:51.226861 master-0 kubenswrapper[7387]: I0308 03:23:51.226702 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31fa65e4-4348-426c-8f41-150c99ee4d6a-config\") pod \"machine-approver-955fcfb87-6hrqx\" (UID: \"31fa65e4-4348-426c-8f41-150c99ee4d6a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx" Mar 08 03:23:51.226861 master-0 kubenswrapper[7387]: I0308 03:23:51.226753 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31fa65e4-4348-426c-8f41-150c99ee4d6a-auth-proxy-config\") pod \"machine-approver-955fcfb87-6hrqx\" (UID: \"31fa65e4-4348-426c-8f41-150c99ee4d6a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx" Mar 08 03:23:51.226861 master-0 kubenswrapper[7387]: E0308 03:23:51.226774 7387 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Mar 08 03:23:51.227053 master-0 kubenswrapper[7387]: E0308 03:23:51.226865 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31fa65e4-4348-426c-8f41-150c99ee4d6a-machine-approver-tls podName:31fa65e4-4348-426c-8f41-150c99ee4d6a nodeName:}" failed. No retries permitted until 2026-03-08 03:23:51.726841408 +0000 UTC m=+768.121317089 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/31fa65e4-4348-426c-8f41-150c99ee4d6a-machine-approver-tls") pod "machine-approver-955fcfb87-6hrqx" (UID: "31fa65e4-4348-426c-8f41-150c99ee4d6a") : secret "machine-approver-tls" not found Mar 08 03:23:51.227053 master-0 kubenswrapper[7387]: I0308 03:23:51.226974 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prxhl\" (UniqueName: \"kubernetes.io/projected/31fa65e4-4348-426c-8f41-150c99ee4d6a-kube-api-access-prxhl\") pod \"machine-approver-955fcfb87-6hrqx\" (UID: \"31fa65e4-4348-426c-8f41-150c99ee4d6a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx" Mar 08 03:23:51.227144 master-0 kubenswrapper[7387]: I0308 03:23:51.227058 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-zljww\" (UID: \"c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww" Mar 08 03:23:51.227260 master-0 kubenswrapper[7387]: E0308 03:23:51.227228 7387 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Mar 08 03:23:51.227323 master-0 kubenswrapper[7387]: E0308 03:23:51.227305 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-control-plane-machine-set-operator-tls podName:c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6 nodeName:}" failed. No retries permitted until 2026-03-08 03:23:55.227281309 +0000 UTC m=+771.621757020 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6686554ddc-zljww" (UID: "c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6") : secret "control-plane-machine-set-operator-tls" not found Mar 08 03:23:51.227747 master-0 kubenswrapper[7387]: I0308 03:23:51.227696 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31fa65e4-4348-426c-8f41-150c99ee4d6a-config\") pod \"machine-approver-955fcfb87-6hrqx\" (UID: \"31fa65e4-4348-426c-8f41-150c99ee4d6a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx" Mar 08 03:23:51.227796 master-0 kubenswrapper[7387]: I0308 03:23:51.227711 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31fa65e4-4348-426c-8f41-150c99ee4d6a-auth-proxy-config\") pod \"machine-approver-955fcfb87-6hrqx\" (UID: \"31fa65e4-4348-426c-8f41-150c99ee4d6a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx" Mar 08 03:23:51.247183 master-0 kubenswrapper[7387]: I0308 03:23:51.247130 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prxhl\" (UniqueName: \"kubernetes.io/projected/31fa65e4-4348-426c-8f41-150c99ee4d6a-kube-api-access-prxhl\") pod \"machine-approver-955fcfb87-6hrqx\" (UID: \"31fa65e4-4348-426c-8f41-150c99ee4d6a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx" Mar 08 03:23:51.734651 master-0 kubenswrapper[7387]: I0308 03:23:51.734560 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/31fa65e4-4348-426c-8f41-150c99ee4d6a-machine-approver-tls\") pod \"machine-approver-955fcfb87-6hrqx\" (UID: 
\"31fa65e4-4348-426c-8f41-150c99ee4d6a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx" Mar 08 03:23:51.734878 master-0 kubenswrapper[7387]: E0308 03:23:51.734814 7387 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Mar 08 03:23:51.734972 master-0 kubenswrapper[7387]: E0308 03:23:51.734942 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31fa65e4-4348-426c-8f41-150c99ee4d6a-machine-approver-tls podName:31fa65e4-4348-426c-8f41-150c99ee4d6a nodeName:}" failed. No retries permitted until 2026-03-08 03:23:52.734878049 +0000 UTC m=+769.129353770 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/31fa65e4-4348-426c-8f41-150c99ee4d6a-machine-approver-tls") pod "machine-approver-955fcfb87-6hrqx" (UID: "31fa65e4-4348-426c-8f41-150c99ee4d6a") : secret "machine-approver-tls" not found Mar 08 03:23:52.750131 master-0 kubenswrapper[7387]: I0308 03:23:52.750054 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/31fa65e4-4348-426c-8f41-150c99ee4d6a-machine-approver-tls\") pod \"machine-approver-955fcfb87-6hrqx\" (UID: \"31fa65e4-4348-426c-8f41-150c99ee4d6a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx" Mar 08 03:23:52.751162 master-0 kubenswrapper[7387]: E0308 03:23:52.750279 7387 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Mar 08 03:23:52.751162 master-0 kubenswrapper[7387]: E0308 03:23:52.750377 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31fa65e4-4348-426c-8f41-150c99ee4d6a-machine-approver-tls podName:31fa65e4-4348-426c-8f41-150c99ee4d6a nodeName:}" failed. 
No retries permitted until 2026-03-08 03:23:54.750354846 +0000 UTC m=+771.144830537 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/31fa65e4-4348-426c-8f41-150c99ee4d6a-machine-approver-tls") pod "machine-approver-955fcfb87-6hrqx" (UID: "31fa65e4-4348-426c-8f41-150c99ee4d6a") : secret "machine-approver-tls" not found Mar 08 03:23:53.446376 master-0 kubenswrapper[7387]: I0308 03:23:53.446300 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss"] Mar 08 03:23:53.447283 master-0 kubenswrapper[7387]: I0308 03:23:53.447223 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss" Mar 08 03:23:53.453374 master-0 kubenswrapper[7387]: I0308 03:23:53.453263 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-ftthh" Mar 08 03:23:53.453374 master-0 kubenswrapper[7387]: I0308 03:23:53.453328 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 08 03:23:53.453374 master-0 kubenswrapper[7387]: I0308 03:23:53.453365 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 08 03:23:53.453616 master-0 kubenswrapper[7387]: I0308 03:23:53.453431 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 08 03:23:53.453616 master-0 kubenswrapper[7387]: I0308 03:23:53.453340 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 08 03:23:53.465395 master-0 kubenswrapper[7387]: I0308 03:23:53.465357 7387 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss"] Mar 08 03:23:53.559464 master-0 kubenswrapper[7387]: I0308 03:23:53.559389 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/38287d1a-b784-4ce9-9650-949d92469519-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-9hjss\" (UID: \"38287d1a-b784-4ce9-9650-949d92469519\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss" Mar 08 03:23:53.559464 master-0 kubenswrapper[7387]: I0308 03:23:53.559456 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38287d1a-b784-4ce9-9650-949d92469519-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-9hjss\" (UID: \"38287d1a-b784-4ce9-9650-949d92469519\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss" Mar 08 03:23:53.559760 master-0 kubenswrapper[7387]: I0308 03:23:53.559502 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4gcw\" (UniqueName: \"kubernetes.io/projected/38287d1a-b784-4ce9-9650-949d92469519-kube-api-access-f4gcw\") pod \"cloud-credential-operator-55d85b7b47-9hjss\" (UID: \"38287d1a-b784-4ce9-9650-949d92469519\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss" Mar 08 03:23:53.660766 master-0 kubenswrapper[7387]: I0308 03:23:53.660679 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/38287d1a-b784-4ce9-9650-949d92469519-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-9hjss\" (UID: 
\"38287d1a-b784-4ce9-9650-949d92469519\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss" Mar 08 03:23:53.660766 master-0 kubenswrapper[7387]: I0308 03:23:53.660770 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38287d1a-b784-4ce9-9650-949d92469519-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-9hjss\" (UID: \"38287d1a-b784-4ce9-9650-949d92469519\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss" Mar 08 03:23:53.661171 master-0 kubenswrapper[7387]: E0308 03:23:53.660976 7387 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Mar 08 03:23:53.661171 master-0 kubenswrapper[7387]: E0308 03:23:53.661096 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38287d1a-b784-4ce9-9650-949d92469519-cloud-credential-operator-serving-cert podName:38287d1a-b784-4ce9-9650-949d92469519 nodeName:}" failed. No retries permitted until 2026-03-08 03:23:54.161063248 +0000 UTC m=+770.555538969 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/38287d1a-b784-4ce9-9650-949d92469519-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-55d85b7b47-9hjss" (UID: "38287d1a-b784-4ce9-9650-949d92469519") : secret "cloud-credential-operator-serving-cert" not found Mar 08 03:23:53.661171 master-0 kubenswrapper[7387]: I0308 03:23:53.661144 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4gcw\" (UniqueName: \"kubernetes.io/projected/38287d1a-b784-4ce9-9650-949d92469519-kube-api-access-f4gcw\") pod \"cloud-credential-operator-55d85b7b47-9hjss\" (UID: \"38287d1a-b784-4ce9-9650-949d92469519\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss" Mar 08 03:23:53.662205 master-0 kubenswrapper[7387]: I0308 03:23:53.662160 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38287d1a-b784-4ce9-9650-949d92469519-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-9hjss\" (UID: \"38287d1a-b784-4ce9-9650-949d92469519\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss" Mar 08 03:23:53.682990 master-0 kubenswrapper[7387]: I0308 03:23:53.682875 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4gcw\" (UniqueName: \"kubernetes.io/projected/38287d1a-b784-4ce9-9650-949d92469519-kube-api-access-f4gcw\") pod \"cloud-credential-operator-55d85b7b47-9hjss\" (UID: \"38287d1a-b784-4ce9-9650-949d92469519\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss" Mar 08 03:23:54.174892 master-0 kubenswrapper[7387]: I0308 03:23:54.174789 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/38287d1a-b784-4ce9-9650-949d92469519-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-9hjss\" (UID: \"38287d1a-b784-4ce9-9650-949d92469519\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss" Mar 08 03:23:54.175739 master-0 kubenswrapper[7387]: E0308 03:23:54.175080 7387 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Mar 08 03:23:54.175739 master-0 kubenswrapper[7387]: E0308 03:23:54.175178 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38287d1a-b784-4ce9-9650-949d92469519-cloud-credential-operator-serving-cert podName:38287d1a-b784-4ce9-9650-949d92469519 nodeName:}" failed. No retries permitted until 2026-03-08 03:23:55.175149957 +0000 UTC m=+771.569625668 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/38287d1a-b784-4ce9-9650-949d92469519-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-55d85b7b47-9hjss" (UID: "38287d1a-b784-4ce9-9650-949d92469519") : secret "cloud-credential-operator-serving-cert" not found Mar 08 03:23:54.734654 master-0 kubenswrapper[7387]: I0308 03:23:54.734593 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844"] Mar 08 03:23:54.737450 master-0 kubenswrapper[7387]: I0308 03:23:54.737409 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844" Mar 08 03:23:54.741211 master-0 kubenswrapper[7387]: I0308 03:23:54.741162 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 08 03:23:54.741211 master-0 kubenswrapper[7387]: I0308 03:23:54.741176 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-bhtmv" Mar 08 03:23:54.741431 master-0 kubenswrapper[7387]: I0308 03:23:54.741344 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 08 03:23:54.741534 master-0 kubenswrapper[7387]: I0308 03:23:54.741172 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 08 03:23:54.757979 master-0 kubenswrapper[7387]: I0308 03:23:54.755590 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844"] Mar 08 03:23:54.784563 master-0 kubenswrapper[7387]: I0308 03:23:54.784496 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/31fa65e4-4348-426c-8f41-150c99ee4d6a-machine-approver-tls\") pod \"machine-approver-955fcfb87-6hrqx\" (UID: \"31fa65e4-4348-426c-8f41-150c99ee4d6a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx" Mar 08 03:23:54.784755 master-0 kubenswrapper[7387]: I0308 03:23:54.784573 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-fb844\" (UID: \"27f5a0ab-3811-4c17-adc1-9ca48ae18ee1\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844" Mar 08 03:23:54.784755 master-0 kubenswrapper[7387]: I0308 03:23:54.784656 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g28tv\" (UniqueName: \"kubernetes.io/projected/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-kube-api-access-g28tv\") pod \"cluster-samples-operator-664cb58b85-fb844\" (UID: \"27f5a0ab-3811-4c17-adc1-9ca48ae18ee1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844" Mar 08 03:23:54.784895 master-0 kubenswrapper[7387]: E0308 03:23:54.784748 7387 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Mar 08 03:23:54.784895 master-0 kubenswrapper[7387]: E0308 03:23:54.784850 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31fa65e4-4348-426c-8f41-150c99ee4d6a-machine-approver-tls podName:31fa65e4-4348-426c-8f41-150c99ee4d6a nodeName:}" failed. No retries permitted until 2026-03-08 03:23:58.78482061 +0000 UTC m=+775.179296321 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/31fa65e4-4348-426c-8f41-150c99ee4d6a-machine-approver-tls") pod "machine-approver-955fcfb87-6hrqx" (UID: "31fa65e4-4348-426c-8f41-150c99ee4d6a") : secret "machine-approver-tls" not found Mar 08 03:23:54.886581 master-0 kubenswrapper[7387]: I0308 03:23:54.886458 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-fb844\" (UID: \"27f5a0ab-3811-4c17-adc1-9ca48ae18ee1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844" Mar 08 03:23:54.887085 master-0 kubenswrapper[7387]: I0308 03:23:54.886589 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g28tv\" (UniqueName: \"kubernetes.io/projected/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-kube-api-access-g28tv\") pod \"cluster-samples-operator-664cb58b85-fb844\" (UID: \"27f5a0ab-3811-4c17-adc1-9ca48ae18ee1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844" Mar 08 03:23:54.887085 master-0 kubenswrapper[7387]: E0308 03:23:54.886669 7387 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Mar 08 03:23:54.887085 master-0 kubenswrapper[7387]: E0308 03:23:54.886772 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-samples-operator-tls podName:27f5a0ab-3811-4c17-adc1-9ca48ae18ee1 nodeName:}" failed. No retries permitted until 2026-03-08 03:23:55.386749991 +0000 UTC m=+771.781225682 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-samples-operator-tls") pod "cluster-samples-operator-664cb58b85-fb844" (UID: "27f5a0ab-3811-4c17-adc1-9ca48ae18ee1") : secret "samples-operator-tls" not found Mar 08 03:23:54.919958 master-0 kubenswrapper[7387]: I0308 03:23:54.919872 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g28tv\" (UniqueName: \"kubernetes.io/projected/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-kube-api-access-g28tv\") pod \"cluster-samples-operator-664cb58b85-fb844\" (UID: \"27f5a0ab-3811-4c17-adc1-9ca48ae18ee1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844" Mar 08 03:23:55.191666 master-0 kubenswrapper[7387]: I0308 03:23:55.191568 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/38287d1a-b784-4ce9-9650-949d92469519-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-9hjss\" (UID: \"38287d1a-b784-4ce9-9650-949d92469519\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss" Mar 08 03:23:55.192437 master-0 kubenswrapper[7387]: E0308 03:23:55.191836 7387 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Mar 08 03:23:55.192437 master-0 kubenswrapper[7387]: E0308 03:23:55.192025 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38287d1a-b784-4ce9-9650-949d92469519-cloud-credential-operator-serving-cert podName:38287d1a-b784-4ce9-9650-949d92469519 nodeName:}" failed. No retries permitted until 2026-03-08 03:23:57.191988128 +0000 UTC m=+773.586463849 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/38287d1a-b784-4ce9-9650-949d92469519-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-55d85b7b47-9hjss" (UID: "38287d1a-b784-4ce9-9650-949d92469519") : secret "cloud-credential-operator-serving-cert" not found Mar 08 03:23:55.293334 master-0 kubenswrapper[7387]: I0308 03:23:55.293242 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-zljww\" (UID: \"c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww" Mar 08 03:23:55.293628 master-0 kubenswrapper[7387]: E0308 03:23:55.293433 7387 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret "control-plane-machine-set-operator-tls" not found Mar 08 03:23:55.293628 master-0 kubenswrapper[7387]: E0308 03:23:55.293553 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-control-plane-machine-set-operator-tls podName:c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6 nodeName:}" failed. No retries permitted until 2026-03-08 03:24:03.293522129 +0000 UTC m=+779.687997870 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6686554ddc-zljww" (UID: "c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6") : secret "control-plane-machine-set-operator-tls" not found Mar 08 03:23:55.394823 master-0 kubenswrapper[7387]: I0308 03:23:55.394741 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-fb844\" (UID: \"27f5a0ab-3811-4c17-adc1-9ca48ae18ee1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844" Mar 08 03:23:55.395261 master-0 kubenswrapper[7387]: E0308 03:23:55.394998 7387 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Mar 08 03:23:55.395380 master-0 kubenswrapper[7387]: E0308 03:23:55.395290 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-samples-operator-tls podName:27f5a0ab-3811-4c17-adc1-9ca48ae18ee1 nodeName:}" failed. No retries permitted until 2026-03-08 03:23:56.395271014 +0000 UTC m=+772.789746695 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-samples-operator-tls") pod "cluster-samples-operator-664cb58b85-fb844" (UID: "27f5a0ab-3811-4c17-adc1-9ca48ae18ee1") : secret "samples-operator-tls" not found Mar 08 03:23:55.456532 master-0 kubenswrapper[7387]: I0308 03:23:55.456416 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b"] Mar 08 03:23:55.457791 master-0 kubenswrapper[7387]: I0308 03:23:55.457770 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" Mar 08 03:23:55.460318 master-0 kubenswrapper[7387]: I0308 03:23:55.460258 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-dqqnp" Mar 08 03:23:55.460445 master-0 kubenswrapper[7387]: I0308 03:23:55.460424 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 08 03:23:55.460572 master-0 kubenswrapper[7387]: I0308 03:23:55.460529 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 08 03:23:55.460781 master-0 kubenswrapper[7387]: I0308 03:23:55.460733 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 08 03:23:55.470515 master-0 kubenswrapper[7387]: I0308 03:23:55.470302 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 08 03:23:55.476279 master-0 kubenswrapper[7387]: I0308 03:23:55.476238 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b"] Mar 08 03:23:55.497801 master-0 kubenswrapper[7387]: I0308 
03:23:55.497755 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/45212ce7-5f95-402e-93c4-83bac844f77d-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" Mar 08 03:23:55.498087 master-0 kubenswrapper[7387]: I0308 03:23:55.498065 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/45212ce7-5f95-402e-93c4-83bac844f77d-images\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" Mar 08 03:23:55.498366 master-0 kubenswrapper[7387]: I0308 03:23:55.498320 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45212ce7-5f95-402e-93c4-83bac844f77d-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" Mar 08 03:23:55.498447 master-0 kubenswrapper[7387]: I0308 03:23:55.498393 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knc57\" (UniqueName: \"kubernetes.io/projected/45212ce7-5f95-402e-93c4-83bac844f77d-kube-api-access-knc57\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" Mar 08 03:23:55.498769 master-0 kubenswrapper[7387]: I0308 03:23:55.498678 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/45212ce7-5f95-402e-93c4-83bac844f77d-config\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" Mar 08 03:23:55.600139 master-0 kubenswrapper[7387]: I0308 03:23:55.600043 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45212ce7-5f95-402e-93c4-83bac844f77d-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" Mar 08 03:23:55.600139 master-0 kubenswrapper[7387]: I0308 03:23:55.600100 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knc57\" (UniqueName: \"kubernetes.io/projected/45212ce7-5f95-402e-93c4-83bac844f77d-kube-api-access-knc57\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" Mar 08 03:23:55.600139 master-0 kubenswrapper[7387]: I0308 03:23:55.600150 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45212ce7-5f95-402e-93c4-83bac844f77d-config\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" Mar 08 03:23:55.600895 master-0 kubenswrapper[7387]: I0308 03:23:55.600838 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/45212ce7-5f95-402e-93c4-83bac844f77d-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " 
pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" Mar 08 03:23:55.601256 master-0 kubenswrapper[7387]: I0308 03:23:55.601215 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/45212ce7-5f95-402e-93c4-83bac844f77d-images\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" Mar 08 03:23:55.601548 master-0 kubenswrapper[7387]: I0308 03:23:55.601080 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45212ce7-5f95-402e-93c4-83bac844f77d-config\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" Mar 08 03:23:55.601660 master-0 kubenswrapper[7387]: I0308 03:23:55.601612 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/45212ce7-5f95-402e-93c4-83bac844f77d-images\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" Mar 08 03:23:55.606090 master-0 kubenswrapper[7387]: I0308 03:23:55.606041 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/45212ce7-5f95-402e-93c4-83bac844f77d-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" Mar 08 03:23:55.606556 master-0 kubenswrapper[7387]: I0308 03:23:55.606496 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/45212ce7-5f95-402e-93c4-83bac844f77d-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" Mar 08 03:23:55.627545 master-0 kubenswrapper[7387]: I0308 03:23:55.627495 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knc57\" (UniqueName: \"kubernetes.io/projected/45212ce7-5f95-402e-93c4-83bac844f77d-kube-api-access-knc57\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" Mar 08 03:23:55.760264 master-0 kubenswrapper[7387]: I0308 03:23:55.760201 7387 scope.go:117] "RemoveContainer" containerID="1d5309bb49bc359c6f650d35b0215dfd107ee09ec728eed9abd6a570ec1d8886" Mar 08 03:23:55.804988 master-0 kubenswrapper[7387]: I0308 03:23:55.804345 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" Mar 08 03:23:56.246527 master-0 kubenswrapper[7387]: I0308 03:23:56.246252 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4bpl8_197afe92-5912-4e90-a477-e3abe001bbc7/ingress-operator/2.log" Mar 08 03:23:56.247496 master-0 kubenswrapper[7387]: I0308 03:23:56.247370 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" event={"ID":"197afe92-5912-4e90-a477-e3abe001bbc7","Type":"ContainerStarted","Data":"3a03f9a9aafa4fbc2ea827886673fad2a6a9650b76a61f6d3b1c9550a51441f3"} Mar 08 03:23:56.314362 master-0 kubenswrapper[7387]: I0308 03:23:56.314185 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl"] Mar 08 03:23:56.315816 master-0 kubenswrapper[7387]: I0308 03:23:56.315752 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" Mar 08 03:23:56.320043 master-0 kubenswrapper[7387]: I0308 03:23:56.318140 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-p5nps" Mar 08 03:23:56.320043 master-0 kubenswrapper[7387]: I0308 03:23:56.318140 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 08 03:23:56.322561 master-0 kubenswrapper[7387]: I0308 03:23:56.322003 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 08 03:23:56.343696 master-0 kubenswrapper[7387]: I0308 03:23:56.343627 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl"] Mar 08 03:23:56.375208 master-0 kubenswrapper[7387]: I0308 03:23:56.375162 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b"] Mar 08 03:23:56.421230 master-0 kubenswrapper[7387]: I0308 03:23:56.421131 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f42fg\" (UniqueName: \"kubernetes.io/projected/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-kube-api-access-f42fg\") pod \"cluster-autoscaler-operator-69576476f7-jd7rl\" (UID: \"2ffe00fd-6834-4a5b-8b0b-b467d284f23c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" Mar 08 03:23:56.421230 master-0 kubenswrapper[7387]: I0308 03:23:56.421233 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-jd7rl\" (UID: 
\"2ffe00fd-6834-4a5b-8b0b-b467d284f23c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" Mar 08 03:23:56.421541 master-0 kubenswrapper[7387]: I0308 03:23:56.421362 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-cert\") pod \"cluster-autoscaler-operator-69576476f7-jd7rl\" (UID: \"2ffe00fd-6834-4a5b-8b0b-b467d284f23c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" Mar 08 03:23:56.421541 master-0 kubenswrapper[7387]: I0308 03:23:56.421431 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-fb844\" (UID: \"27f5a0ab-3811-4c17-adc1-9ca48ae18ee1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844" Mar 08 03:23:56.421655 master-0 kubenswrapper[7387]: E0308 03:23:56.421568 7387 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Mar 08 03:23:56.421715 master-0 kubenswrapper[7387]: E0308 03:23:56.421653 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-samples-operator-tls podName:27f5a0ab-3811-4c17-adc1-9ca48ae18ee1 nodeName:}" failed. No retries permitted until 2026-03-08 03:23:58.421627345 +0000 UTC m=+774.816103056 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-samples-operator-tls") pod "cluster-samples-operator-664cb58b85-fb844" (UID: "27f5a0ab-3811-4c17-adc1-9ca48ae18ee1") : secret "samples-operator-tls" not found Mar 08 03:23:56.522855 master-0 kubenswrapper[7387]: I0308 03:23:56.522788 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-cert\") pod \"cluster-autoscaler-operator-69576476f7-jd7rl\" (UID: \"2ffe00fd-6834-4a5b-8b0b-b467d284f23c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" Mar 08 03:23:56.523205 master-0 kubenswrapper[7387]: E0308 03:23:56.523132 7387 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Mar 08 03:23:56.523274 master-0 kubenswrapper[7387]: I0308 03:23:56.523201 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42fg\" (UniqueName: \"kubernetes.io/projected/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-kube-api-access-f42fg\") pod \"cluster-autoscaler-operator-69576476f7-jd7rl\" (UID: \"2ffe00fd-6834-4a5b-8b0b-b467d284f23c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" Mar 08 03:23:56.523346 master-0 kubenswrapper[7387]: E0308 03:23:56.523312 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-cert podName:2ffe00fd-6834-4a5b-8b0b-b467d284f23c nodeName:}" failed. No retries permitted until 2026-03-08 03:23:57.023261378 +0000 UTC m=+773.417737279 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-cert") pod "cluster-autoscaler-operator-69576476f7-jd7rl" (UID: "2ffe00fd-6834-4a5b-8b0b-b467d284f23c") : secret "cluster-autoscaler-operator-cert" not found Mar 08 03:23:56.523440 master-0 kubenswrapper[7387]: I0308 03:23:56.523389 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-jd7rl\" (UID: \"2ffe00fd-6834-4a5b-8b0b-b467d284f23c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" Mar 08 03:23:56.525256 master-0 kubenswrapper[7387]: I0308 03:23:56.525197 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-jd7rl\" (UID: \"2ffe00fd-6834-4a5b-8b0b-b467d284f23c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" Mar 08 03:23:56.546106 master-0 kubenswrapper[7387]: I0308 03:23:56.546048 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f42fg\" (UniqueName: \"kubernetes.io/projected/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-kube-api-access-f42fg\") pod \"cluster-autoscaler-operator-69576476f7-jd7rl\" (UID: \"2ffe00fd-6834-4a5b-8b0b-b467d284f23c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" Mar 08 03:23:56.970123 master-0 kubenswrapper[7387]: I0308 03:23:56.966801 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-vw4v4"] Mar 08 03:23:56.970123 master-0 kubenswrapper[7387]: I0308 03:23:56.969871 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-vw4v4" Mar 08 03:23:56.975753 master-0 kubenswrapper[7387]: I0308 03:23:56.975673 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-jzkrb" Mar 08 03:23:56.976239 master-0 kubenswrapper[7387]: I0308 03:23:56.976177 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 08 03:23:56.993973 master-0 kubenswrapper[7387]: I0308 03:23:56.992516 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-vw4v4"] Mar 08 03:23:57.032751 master-0 kubenswrapper[7387]: I0308 03:23:57.032689 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/965f8eef-c5af-499b-b1db-cf63072781cc-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-vw4v4\" (UID: \"965f8eef-c5af-499b-b1db-cf63072781cc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-vw4v4" Mar 08 03:23:57.032751 master-0 kubenswrapper[7387]: I0308 03:23:57.032747 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-cert\") pod \"cluster-autoscaler-operator-69576476f7-jd7rl\" (UID: \"2ffe00fd-6834-4a5b-8b0b-b467d284f23c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" Mar 08 03:23:57.033047 master-0 kubenswrapper[7387]: I0308 03:23:57.032823 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjzs5\" (UniqueName: 
\"kubernetes.io/projected/965f8eef-c5af-499b-b1db-cf63072781cc-kube-api-access-mjzs5\") pod \"cluster-storage-operator-6fbfc8dc8f-vw4v4\" (UID: \"965f8eef-c5af-499b-b1db-cf63072781cc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-vw4v4" Mar 08 03:23:57.033047 master-0 kubenswrapper[7387]: E0308 03:23:57.033020 7387 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Mar 08 03:23:57.033133 master-0 kubenswrapper[7387]: E0308 03:23:57.033066 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-cert podName:2ffe00fd-6834-4a5b-8b0b-b467d284f23c nodeName:}" failed. No retries permitted until 2026-03-08 03:23:58.033050245 +0000 UTC m=+774.427525926 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-cert") pod "cluster-autoscaler-operator-69576476f7-jd7rl" (UID: "2ffe00fd-6834-4a5b-8b0b-b467d284f23c") : secret "cluster-autoscaler-operator-cert" not found Mar 08 03:23:57.053977 master-0 kubenswrapper[7387]: I0308 03:23:57.053834 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-9l8dc"] Mar 08 03:23:57.055471 master-0 kubenswrapper[7387]: I0308 03:23:57.055446 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:23:57.062572 master-0 kubenswrapper[7387]: I0308 03:23:57.062521 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 08 03:23:57.062846 master-0 kubenswrapper[7387]: I0308 03:23:57.062815 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 08 03:23:57.062963 master-0 kubenswrapper[7387]: I0308 03:23:57.062936 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 08 03:23:57.063139 master-0 kubenswrapper[7387]: I0308 03:23:57.063115 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 08 03:23:57.063218 master-0 kubenswrapper[7387]: I0308 03:23:57.062529 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 08 03:23:57.066979 master-0 kubenswrapper[7387]: I0308 03:23:57.062849 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-rgflg" Mar 08 03:23:57.070953 master-0 kubenswrapper[7387]: I0308 03:23:57.069045 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-9l8dc"] Mar 08 03:23:57.135065 master-0 kubenswrapper[7387]: I0308 03:23:57.135020 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2728b91e-d59a-4e85-b245-0f297e9377f9-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:23:57.136118 master-0 kubenswrapper[7387]: I0308 03:23:57.135520 7387 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2728b91e-d59a-4e85-b245-0f297e9377f9-service-ca-bundle\") pod \"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:23:57.136435 master-0 kubenswrapper[7387]: I0308 03:23:57.136413 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/2728b91e-d59a-4e85-b245-0f297e9377f9-snapshots\") pod \"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:23:57.136623 master-0 kubenswrapper[7387]: I0308 03:23:57.136605 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjzs5\" (UniqueName: \"kubernetes.io/projected/965f8eef-c5af-499b-b1db-cf63072781cc-kube-api-access-mjzs5\") pod \"cluster-storage-operator-6fbfc8dc8f-vw4v4\" (UID: \"965f8eef-c5af-499b-b1db-cf63072781cc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-vw4v4" Mar 08 03:23:57.136785 master-0 kubenswrapper[7387]: I0308 03:23:57.136767 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmdmd\" (UniqueName: \"kubernetes.io/projected/2728b91e-d59a-4e85-b245-0f297e9377f9-kube-api-access-zmdmd\") pod \"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:23:57.137496 master-0 kubenswrapper[7387]: I0308 03:23:57.137476 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2728b91e-d59a-4e85-b245-0f297e9377f9-serving-cert\") pod 
\"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:23:57.137713 master-0 kubenswrapper[7387]: I0308 03:23:57.137669 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/965f8eef-c5af-499b-b1db-cf63072781cc-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-vw4v4\" (UID: \"965f8eef-c5af-499b-b1db-cf63072781cc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-vw4v4" Mar 08 03:23:57.144630 master-0 kubenswrapper[7387]: I0308 03:23:57.144573 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/965f8eef-c5af-499b-b1db-cf63072781cc-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-vw4v4\" (UID: \"965f8eef-c5af-499b-b1db-cf63072781cc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-vw4v4" Mar 08 03:23:57.158354 master-0 kubenswrapper[7387]: I0308 03:23:57.158296 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjzs5\" (UniqueName: \"kubernetes.io/projected/965f8eef-c5af-499b-b1db-cf63072781cc-kube-api-access-mjzs5\") pod \"cluster-storage-operator-6fbfc8dc8f-vw4v4\" (UID: \"965f8eef-c5af-499b-b1db-cf63072781cc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-vw4v4" Mar 08 03:23:57.240813 master-0 kubenswrapper[7387]: I0308 03:23:57.240561 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmdmd\" (UniqueName: \"kubernetes.io/projected/2728b91e-d59a-4e85-b245-0f297e9377f9-kube-api-access-zmdmd\") pod \"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " 
pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:23:57.241425 master-0 kubenswrapper[7387]: I0308 03:23:57.241334 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2728b91e-d59a-4e85-b245-0f297e9377f9-serving-cert\") pod \"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:23:57.242536 master-0 kubenswrapper[7387]: I0308 03:23:57.242035 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/38287d1a-b784-4ce9-9650-949d92469519-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-9hjss\" (UID: \"38287d1a-b784-4ce9-9650-949d92469519\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss" Mar 08 03:23:57.242536 master-0 kubenswrapper[7387]: E0308 03:23:57.242158 7387 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Mar 08 03:23:57.242536 master-0 kubenswrapper[7387]: I0308 03:23:57.242229 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2728b91e-d59a-4e85-b245-0f297e9377f9-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:23:57.242536 master-0 kubenswrapper[7387]: E0308 03:23:57.242237 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38287d1a-b784-4ce9-9650-949d92469519-cloud-credential-operator-serving-cert podName:38287d1a-b784-4ce9-9650-949d92469519 nodeName:}" failed. 
No retries permitted until 2026-03-08 03:24:01.242215135 +0000 UTC m=+777.636690826 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/38287d1a-b784-4ce9-9650-949d92469519-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-55d85b7b47-9hjss" (UID: "38287d1a-b784-4ce9-9650-949d92469519") : secret "cloud-credential-operator-serving-cert" not found Mar 08 03:23:57.242536 master-0 kubenswrapper[7387]: I0308 03:23:57.242493 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2728b91e-d59a-4e85-b245-0f297e9377f9-service-ca-bundle\") pod \"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:23:57.242840 master-0 kubenswrapper[7387]: I0308 03:23:57.242618 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/2728b91e-d59a-4e85-b245-0f297e9377f9-snapshots\") pod \"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:23:57.243687 master-0 kubenswrapper[7387]: I0308 03:23:57.242972 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2728b91e-d59a-4e85-b245-0f297e9377f9-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:23:57.243687 master-0 kubenswrapper[7387]: I0308 03:23:57.243637 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2728b91e-d59a-4e85-b245-0f297e9377f9-service-ca-bundle\") pod 
\"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:23:57.244391 master-0 kubenswrapper[7387]: I0308 03:23:57.244346 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2728b91e-d59a-4e85-b245-0f297e9377f9-serving-cert\") pod \"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:23:57.256007 master-0 kubenswrapper[7387]: I0308 03:23:57.246519 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/2728b91e-d59a-4e85-b245-0f297e9377f9-snapshots\") pod \"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:23:57.256007 master-0 kubenswrapper[7387]: I0308 03:23:57.255819 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" event={"ID":"45212ce7-5f95-402e-93c4-83bac844f77d","Type":"ContainerStarted","Data":"296c48bf2ce9de06a78dcb57c1cdbe34ecc220f6b65f5aa0b90cfb68a9d30264"} Mar 08 03:23:57.260062 master-0 kubenswrapper[7387]: I0308 03:23:57.258777 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmdmd\" (UniqueName: \"kubernetes.io/projected/2728b91e-d59a-4e85-b245-0f297e9377f9-kube-api-access-zmdmd\") pod \"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:23:57.302674 master-0 kubenswrapper[7387]: I0308 03:23:57.302586 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-vw4v4" Mar 08 03:23:57.401257 master-0 kubenswrapper[7387]: I0308 03:23:57.401031 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:23:57.677447 master-0 kubenswrapper[7387]: I0308 03:23:57.677367 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt"] Mar 08 03:23:57.678576 master-0 kubenswrapper[7387]: I0308 03:23:57.678495 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" Mar 08 03:23:57.682060 master-0 kubenswrapper[7387]: I0308 03:23:57.682024 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 08 03:23:57.682060 master-0 kubenswrapper[7387]: I0308 03:23:57.682041 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 08 03:23:57.682490 master-0 kubenswrapper[7387]: I0308 03:23:57.682139 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 08 03:23:57.682941 master-0 kubenswrapper[7387]: I0308 03:23:57.682884 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-gqqgx" Mar 08 03:23:57.683311 master-0 kubenswrapper[7387]: I0308 03:23:57.683276 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 08 03:23:57.683925 master-0 kubenswrapper[7387]: I0308 03:23:57.683539 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 08 
03:23:57.697242 master-0 kubenswrapper[7387]: I0308 03:23:57.692559 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt"] Mar 08 03:23:57.699224 master-0 kubenswrapper[7387]: I0308 03:23:57.699173 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-vw4v4"] Mar 08 03:23:57.751719 master-0 kubenswrapper[7387]: I0308 03:23:57.751580 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxhht\" (UniqueName: \"kubernetes.io/projected/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-kube-api-access-cxhht\") pod \"machine-config-operator-fdb5c78b5-qfbvt\" (UID: \"81abc17a-8a51-44e2-a5df-5ddb394a9fa6\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" Mar 08 03:23:57.751719 master-0 kubenswrapper[7387]: I0308 03:23:57.751729 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-qfbvt\" (UID: \"81abc17a-8a51-44e2-a5df-5ddb394a9fa6\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" Mar 08 03:23:57.753289 master-0 kubenswrapper[7387]: I0308 03:23:57.751873 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-images\") pod \"machine-config-operator-fdb5c78b5-qfbvt\" (UID: \"81abc17a-8a51-44e2-a5df-5ddb394a9fa6\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" Mar 08 03:23:57.753289 master-0 kubenswrapper[7387]: I0308 03:23:57.751982 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-qfbvt\" (UID: \"81abc17a-8a51-44e2-a5df-5ddb394a9fa6\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" Mar 08 03:23:57.814413 master-0 kubenswrapper[7387]: I0308 03:23:57.814286 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-9l8dc"] Mar 08 03:23:57.854418 master-0 kubenswrapper[7387]: I0308 03:23:57.854346 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxhht\" (UniqueName: \"kubernetes.io/projected/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-kube-api-access-cxhht\") pod \"machine-config-operator-fdb5c78b5-qfbvt\" (UID: \"81abc17a-8a51-44e2-a5df-5ddb394a9fa6\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" Mar 08 03:23:57.854822 master-0 kubenswrapper[7387]: I0308 03:23:57.854769 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-qfbvt\" (UID: \"81abc17a-8a51-44e2-a5df-5ddb394a9fa6\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" Mar 08 03:23:57.855205 master-0 kubenswrapper[7387]: I0308 03:23:57.855057 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-images\") pod \"machine-config-operator-fdb5c78b5-qfbvt\" (UID: \"81abc17a-8a51-44e2-a5df-5ddb394a9fa6\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" Mar 08 03:23:57.855205 master-0 kubenswrapper[7387]: I0308 03:23:57.855143 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-qfbvt\" (UID: \"81abc17a-8a51-44e2-a5df-5ddb394a9fa6\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" Mar 08 03:23:57.857596 master-0 kubenswrapper[7387]: I0308 03:23:57.857437 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-images\") pod \"machine-config-operator-fdb5c78b5-qfbvt\" (UID: \"81abc17a-8a51-44e2-a5df-5ddb394a9fa6\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" Mar 08 03:23:57.857703 master-0 kubenswrapper[7387]: I0308 03:23:57.857621 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-qfbvt\" (UID: \"81abc17a-8a51-44e2-a5df-5ddb394a9fa6\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" Mar 08 03:23:57.859497 master-0 kubenswrapper[7387]: I0308 03:23:57.859472 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-qfbvt\" (UID: \"81abc17a-8a51-44e2-a5df-5ddb394a9fa6\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" Mar 08 03:23:57.875230 master-0 kubenswrapper[7387]: I0308 03:23:57.875180 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxhht\" (UniqueName: \"kubernetes.io/projected/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-kube-api-access-cxhht\") pod \"machine-config-operator-fdb5c78b5-qfbvt\" (UID: \"81abc17a-8a51-44e2-a5df-5ddb394a9fa6\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" 
Mar 08 03:23:58.013443 master-0 kubenswrapper[7387]: I0308 03:23:58.013346 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" Mar 08 03:23:58.058229 master-0 kubenswrapper[7387]: I0308 03:23:58.058173 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-cert\") pod \"cluster-autoscaler-operator-69576476f7-jd7rl\" (UID: \"2ffe00fd-6834-4a5b-8b0b-b467d284f23c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" Mar 08 03:23:58.058444 master-0 kubenswrapper[7387]: E0308 03:23:58.058415 7387 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Mar 08 03:23:58.058503 master-0 kubenswrapper[7387]: E0308 03:23:58.058489 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-cert podName:2ffe00fd-6834-4a5b-8b0b-b467d284f23c nodeName:}" failed. No retries permitted until 2026-03-08 03:24:00.058468211 +0000 UTC m=+776.452943902 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-cert") pod "cluster-autoscaler-operator-69576476f7-jd7rl" (UID: "2ffe00fd-6834-4a5b-8b0b-b467d284f23c") : secret "cluster-autoscaler-operator-cert" not found Mar 08 03:23:58.358827 master-0 kubenswrapper[7387]: W0308 03:23:58.358767 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod965f8eef_c5af_499b_b1db_cf63072781cc.slice/crio-31218dcdf0ecf9df2bd5ef8038d35cfb3eccf97f3c92277ac22d33217175df8e WatchSource:0}: Error finding container 31218dcdf0ecf9df2bd5ef8038d35cfb3eccf97f3c92277ac22d33217175df8e: Status 404 returned error can't find the container with id 31218dcdf0ecf9df2bd5ef8038d35cfb3eccf97f3c92277ac22d33217175df8e Mar 08 03:23:58.463939 master-0 kubenswrapper[7387]: I0308 03:23:58.463106 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-fb844\" (UID: \"27f5a0ab-3811-4c17-adc1-9ca48ae18ee1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844" Mar 08 03:23:58.463939 master-0 kubenswrapper[7387]: E0308 03:23:58.463395 7387 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Mar 08 03:23:58.463939 master-0 kubenswrapper[7387]: E0308 03:23:58.463511 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-samples-operator-tls podName:27f5a0ab-3811-4c17-adc1-9ca48ae18ee1 nodeName:}" failed. No retries permitted until 2026-03-08 03:24:02.463482042 +0000 UTC m=+778.857957723 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-samples-operator-tls") pod "cluster-samples-operator-664cb58b85-fb844" (UID: "27f5a0ab-3811-4c17-adc1-9ca48ae18ee1") : secret "samples-operator-tls" not found Mar 08 03:23:58.643823 master-0 kubenswrapper[7387]: I0308 03:23:58.642836 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc"] Mar 08 03:23:58.644487 master-0 kubenswrapper[7387]: I0308 03:23:58.644460 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" Mar 08 03:23:58.649055 master-0 kubenswrapper[7387]: I0308 03:23:58.648221 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 08 03:23:58.649055 master-0 kubenswrapper[7387]: I0308 03:23:58.648329 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 08 03:23:58.649055 master-0 kubenswrapper[7387]: I0308 03:23:58.648529 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 08 03:23:58.649055 master-0 kubenswrapper[7387]: I0308 03:23:58.648858 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-qnnnr" Mar 08 03:23:58.649055 master-0 kubenswrapper[7387]: I0308 03:23:58.648862 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 08 03:23:58.649434 master-0 kubenswrapper[7387]: I0308 03:23:58.649329 7387 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 08 03:23:58.768148 master-0 kubenswrapper[7387]: I0308 03:23:58.768011 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f650cb41-406a-45e4-996d-3baa7acff8bc-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-56tkc\" (UID: \"f650cb41-406a-45e4-996d-3baa7acff8bc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" Mar 08 03:23:58.768363 master-0 kubenswrapper[7387]: I0308 03:23:58.768157 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsrjx\" (UniqueName: \"kubernetes.io/projected/f650cb41-406a-45e4-996d-3baa7acff8bc-kube-api-access-rsrjx\") pod \"cluster-cloud-controller-manager-operator-559568b945-56tkc\" (UID: \"f650cb41-406a-45e4-996d-3baa7acff8bc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" Mar 08 03:23:58.768363 master-0 kubenswrapper[7387]: I0308 03:23:58.768197 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f650cb41-406a-45e4-996d-3baa7acff8bc-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-56tkc\" (UID: \"f650cb41-406a-45e4-996d-3baa7acff8bc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" Mar 08 03:23:58.768363 master-0 kubenswrapper[7387]: I0308 03:23:58.768222 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f650cb41-406a-45e4-996d-3baa7acff8bc-host-etc-kube\") pod 
\"cluster-cloud-controller-manager-operator-559568b945-56tkc\" (UID: \"f650cb41-406a-45e4-996d-3baa7acff8bc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" Mar 08 03:23:58.768363 master-0 kubenswrapper[7387]: I0308 03:23:58.768320 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/f650cb41-406a-45e4-996d-3baa7acff8bc-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-56tkc\" (UID: \"f650cb41-406a-45e4-996d-3baa7acff8bc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" Mar 08 03:23:58.812417 master-0 kubenswrapper[7387]: I0308 03:23:58.812369 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt"] Mar 08 03:23:58.818168 master-0 kubenswrapper[7387]: W0308 03:23:58.818100 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81abc17a_8a51_44e2_a5df_5ddb394a9fa6.slice/crio-80f8e0a5b29cf774f05a36f5e54407ef8ecffe58d5e1c71074bcd340ab2217dd WatchSource:0}: Error finding container 80f8e0a5b29cf774f05a36f5e54407ef8ecffe58d5e1c71074bcd340ab2217dd: Status 404 returned error can't find the container with id 80f8e0a5b29cf774f05a36f5e54407ef8ecffe58d5e1c71074bcd340ab2217dd Mar 08 03:23:58.869787 master-0 kubenswrapper[7387]: I0308 03:23:58.869644 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f650cb41-406a-45e4-996d-3baa7acff8bc-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-56tkc\" (UID: \"f650cb41-406a-45e4-996d-3baa7acff8bc\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" Mar 08 03:23:58.869787 master-0 kubenswrapper[7387]: I0308 03:23:58.869692 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsrjx\" (UniqueName: \"kubernetes.io/projected/f650cb41-406a-45e4-996d-3baa7acff8bc-kube-api-access-rsrjx\") pod \"cluster-cloud-controller-manager-operator-559568b945-56tkc\" (UID: \"f650cb41-406a-45e4-996d-3baa7acff8bc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" Mar 08 03:23:58.870323 master-0 kubenswrapper[7387]: I0308 03:23:58.869827 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f650cb41-406a-45e4-996d-3baa7acff8bc-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-56tkc\" (UID: \"f650cb41-406a-45e4-996d-3baa7acff8bc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" Mar 08 03:23:58.870323 master-0 kubenswrapper[7387]: I0308 03:23:58.869855 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f650cb41-406a-45e4-996d-3baa7acff8bc-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-56tkc\" (UID: \"f650cb41-406a-45e4-996d-3baa7acff8bc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" Mar 08 03:23:58.870323 master-0 kubenswrapper[7387]: I0308 03:23:58.869876 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/f650cb41-406a-45e4-996d-3baa7acff8bc-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-56tkc\" (UID: 
\"f650cb41-406a-45e4-996d-3baa7acff8bc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" Mar 08 03:23:58.870323 master-0 kubenswrapper[7387]: I0308 03:23:58.869959 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f650cb41-406a-45e4-996d-3baa7acff8bc-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-56tkc\" (UID: \"f650cb41-406a-45e4-996d-3baa7acff8bc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" Mar 08 03:23:58.870612 master-0 kubenswrapper[7387]: I0308 03:23:58.870426 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/31fa65e4-4348-426c-8f41-150c99ee4d6a-machine-approver-tls\") pod \"machine-approver-955fcfb87-6hrqx\" (UID: \"31fa65e4-4348-426c-8f41-150c99ee4d6a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx" Mar 08 03:23:58.870785 master-0 kubenswrapper[7387]: E0308 03:23:58.870707 7387 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Mar 08 03:23:58.870953 master-0 kubenswrapper[7387]: E0308 03:23:58.870839 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31fa65e4-4348-426c-8f41-150c99ee4d6a-machine-approver-tls podName:31fa65e4-4348-426c-8f41-150c99ee4d6a nodeName:}" failed. No retries permitted until 2026-03-08 03:24:06.870808895 +0000 UTC m=+783.265284616 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/31fa65e4-4348-426c-8f41-150c99ee4d6a-machine-approver-tls") pod "machine-approver-955fcfb87-6hrqx" (UID: "31fa65e4-4348-426c-8f41-150c99ee4d6a") : secret "machine-approver-tls" not found Mar 08 03:23:58.871129 master-0 kubenswrapper[7387]: I0308 03:23:58.870991 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f650cb41-406a-45e4-996d-3baa7acff8bc-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-56tkc\" (UID: \"f650cb41-406a-45e4-996d-3baa7acff8bc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" Mar 08 03:23:58.871129 master-0 kubenswrapper[7387]: I0308 03:23:58.870991 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f650cb41-406a-45e4-996d-3baa7acff8bc-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-56tkc\" (UID: \"f650cb41-406a-45e4-996d-3baa7acff8bc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" Mar 08 03:23:58.874819 master-0 kubenswrapper[7387]: I0308 03:23:58.874784 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/f650cb41-406a-45e4-996d-3baa7acff8bc-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-56tkc\" (UID: \"f650cb41-406a-45e4-996d-3baa7acff8bc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" Mar 08 03:23:58.887381 master-0 kubenswrapper[7387]: I0308 03:23:58.887294 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsrjx\" (UniqueName: 
\"kubernetes.io/projected/f650cb41-406a-45e4-996d-3baa7acff8bc-kube-api-access-rsrjx\") pod \"cluster-cloud-controller-manager-operator-559568b945-56tkc\" (UID: \"f650cb41-406a-45e4-996d-3baa7acff8bc\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" Mar 08 03:23:58.963700 master-0 kubenswrapper[7387]: I0308 03:23:58.963182 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" Mar 08 03:23:59.062707 master-0 kubenswrapper[7387]: I0308 03:23:59.059698 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7"] Mar 08 03:23:59.062707 master-0 kubenswrapper[7387]: I0308 03:23:59.061351 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:23:59.064196 master-0 kubenswrapper[7387]: I0308 03:23:59.063729 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 08 03:23:59.064196 master-0 kubenswrapper[7387]: I0308 03:23:59.063967 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 08 03:23:59.069063 master-0 kubenswrapper[7387]: I0308 03:23:59.066229 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-s25xz" Mar 08 03:23:59.069063 master-0 kubenswrapper[7387]: I0308 03:23:59.066423 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 08 03:23:59.124005 master-0 kubenswrapper[7387]: I0308 03:23:59.120777 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7"] Mar 08 03:23:59.175727 master-0 
kubenswrapper[7387]: I0308 03:23:59.175684 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c65557b-9566-49f1-a049-fe492ca201b5-config\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:23:59.175961 master-0 kubenswrapper[7387]: I0308 03:23:59.175946 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8c65557b-9566-49f1-a049-fe492ca201b5-images\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:23:59.176120 master-0 kubenswrapper[7387]: I0308 03:23:59.176107 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fw25\" (UniqueName: \"kubernetes.io/projected/8c65557b-9566-49f1-a049-fe492ca201b5-kube-api-access-5fw25\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:23:59.176204 master-0 kubenswrapper[7387]: I0308 03:23:59.176190 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:23:59.287751 master-0 kubenswrapper[7387]: I0308 03:23:59.285036 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fw25\" (UniqueName: 
\"kubernetes.io/projected/8c65557b-9566-49f1-a049-fe492ca201b5-kube-api-access-5fw25\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:23:59.287751 master-0 kubenswrapper[7387]: I0308 03:23:59.285138 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:23:59.287751 master-0 kubenswrapper[7387]: I0308 03:23:59.285202 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c65557b-9566-49f1-a049-fe492ca201b5-config\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:23:59.287751 master-0 kubenswrapper[7387]: I0308 03:23:59.285265 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8c65557b-9566-49f1-a049-fe492ca201b5-images\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:23:59.287751 master-0 kubenswrapper[7387]: E0308 03:23:59.285582 7387 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Mar 08 03:23:59.287751 master-0 kubenswrapper[7387]: E0308 03:23:59.286312 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls 
podName:8c65557b-9566-49f1-a049-fe492ca201b5 nodeName:}" failed. No retries permitted until 2026-03-08 03:23:59.78627959 +0000 UTC m=+776.180755281 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls") pod "machine-api-operator-84bf6db4f9-5l4t7" (UID: "8c65557b-9566-49f1-a049-fe492ca201b5") : secret "machine-api-operator-tls" not found Mar 08 03:23:59.287751 master-0 kubenswrapper[7387]: I0308 03:23:59.286315 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c65557b-9566-49f1-a049-fe492ca201b5-config\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:23:59.287751 master-0 kubenswrapper[7387]: I0308 03:23:59.286843 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8c65557b-9566-49f1-a049-fe492ca201b5-images\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:23:59.297480 master-0 kubenswrapper[7387]: I0308 03:23:59.297325 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-vw4v4" event={"ID":"965f8eef-c5af-499b-b1db-cf63072781cc","Type":"ContainerStarted","Data":"31218dcdf0ecf9df2bd5ef8038d35cfb3eccf97f3c92277ac22d33217175df8e"} Mar 08 03:23:59.303389 master-0 kubenswrapper[7387]: I0308 03:23:59.303296 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" 
event={"ID":"81abc17a-8a51-44e2-a5df-5ddb394a9fa6","Type":"ContainerStarted","Data":"74780e40f2b583e58def76aa97098f629b82af3d78b0be6cbeb4b5ffe46ba364"} Mar 08 03:23:59.303469 master-0 kubenswrapper[7387]: I0308 03:23:59.303397 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" event={"ID":"81abc17a-8a51-44e2-a5df-5ddb394a9fa6","Type":"ContainerStarted","Data":"8520a5f64276e58759b21a4f5abc65748412aaf732608a2bdda90bcabbccfe1e"} Mar 08 03:23:59.303469 master-0 kubenswrapper[7387]: I0308 03:23:59.303456 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" event={"ID":"81abc17a-8a51-44e2-a5df-5ddb394a9fa6","Type":"ContainerStarted","Data":"80f8e0a5b29cf774f05a36f5e54407ef8ecffe58d5e1c71074bcd340ab2217dd"} Mar 08 03:23:59.303996 master-0 kubenswrapper[7387]: I0308 03:23:59.303780 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fw25\" (UniqueName: \"kubernetes.io/projected/8c65557b-9566-49f1-a049-fe492ca201b5-kube-api-access-5fw25\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:23:59.306366 master-0 kubenswrapper[7387]: I0308 03:23:59.306123 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" event={"ID":"2728b91e-d59a-4e85-b245-0f297e9377f9","Type":"ContainerStarted","Data":"cd205a040d032b191e7f07df4a3f791df390b5a5d5098d634b2bcb3100b4a7bb"} Mar 08 03:23:59.307793 master-0 kubenswrapper[7387]: I0308 03:23:59.307539 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" 
event={"ID":"f650cb41-406a-45e4-996d-3baa7acff8bc","Type":"ContainerStarted","Data":"11a536000b80400c7bcaa1e52cfab58145a4e4f9f3de39066de64d0e1157a40f"} Mar 08 03:23:59.308881 master-0 kubenswrapper[7387]: I0308 03:23:59.308858 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" event={"ID":"45212ce7-5f95-402e-93c4-83bac844f77d","Type":"ContainerStarted","Data":"d4e4aeefdf39f017a9a18ec3cddd2921a038fba8d62a37c77c77a1f991e845ee"} Mar 08 03:23:59.308963 master-0 kubenswrapper[7387]: I0308 03:23:59.308883 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" event={"ID":"45212ce7-5f95-402e-93c4-83bac844f77d","Type":"ContainerStarted","Data":"1bc524d4935db97fb50be5674147f8f9cecf357fca9acfe424caa68101eaec3d"} Mar 08 03:23:59.327928 master-0 kubenswrapper[7387]: I0308 03:23:59.322638 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" podStartSLOduration=2.322616898 podStartE2EDuration="2.322616898s" podCreationTimestamp="2026-03-08 03:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:23:59.317193566 +0000 UTC m=+775.711669247" watchObservedRunningTime="2026-03-08 03:23:59.322616898 +0000 UTC m=+775.717092579" Mar 08 03:23:59.795797 master-0 kubenswrapper[7387]: I0308 03:23:59.795721 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:23:59.796524 master-0 kubenswrapper[7387]: E0308 03:23:59.795916 
7387 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Mar 08 03:23:59.796524 master-0 kubenswrapper[7387]: E0308 03:23:59.795988 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls podName:8c65557b-9566-49f1-a049-fe492ca201b5 nodeName:}" failed. No retries permitted until 2026-03-08 03:24:00.795969574 +0000 UTC m=+777.190445245 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls") pod "machine-api-operator-84bf6db4f9-5l4t7" (UID: "8c65557b-9566-49f1-a049-fe492ca201b5") : secret "machine-api-operator-tls" not found Mar 08 03:24:00.100199 master-0 kubenswrapper[7387]: I0308 03:24:00.099981 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-cert\") pod \"cluster-autoscaler-operator-69576476f7-jd7rl\" (UID: \"2ffe00fd-6834-4a5b-8b0b-b467d284f23c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" Mar 08 03:24:00.100199 master-0 kubenswrapper[7387]: E0308 03:24:00.100186 7387 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Mar 08 03:24:00.100523 master-0 kubenswrapper[7387]: E0308 03:24:00.100271 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-cert podName:2ffe00fd-6834-4a5b-8b0b-b467d284f23c nodeName:}" failed. No retries permitted until 2026-03-08 03:24:04.100252117 +0000 UTC m=+780.494727798 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-cert") pod "cluster-autoscaler-operator-69576476f7-jd7rl" (UID: "2ffe00fd-6834-4a5b-8b0b-b467d284f23c") : secret "cluster-autoscaler-operator-cert" not found Mar 08 03:24:00.809827 master-0 kubenswrapper[7387]: I0308 03:24:00.809457 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:24:00.809827 master-0 kubenswrapper[7387]: E0308 03:24:00.809641 7387 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Mar 08 03:24:00.822950 master-0 kubenswrapper[7387]: E0308 03:24:00.809866 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls podName:8c65557b-9566-49f1-a049-fe492ca201b5 nodeName:}" failed. No retries permitted until 2026-03-08 03:24:02.809849549 +0000 UTC m=+779.204325240 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls") pod "machine-api-operator-84bf6db4f9-5l4t7" (UID: "8c65557b-9566-49f1-a049-fe492ca201b5") : secret "machine-api-operator-tls" not found Mar 08 03:24:01.317633 master-0 kubenswrapper[7387]: I0308 03:24:01.317568 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/38287d1a-b784-4ce9-9650-949d92469519-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-9hjss\" (UID: \"38287d1a-b784-4ce9-9650-949d92469519\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss" Mar 08 03:24:01.317837 master-0 kubenswrapper[7387]: E0308 03:24:01.317791 7387 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Mar 08 03:24:01.317878 master-0 kubenswrapper[7387]: E0308 03:24:01.317847 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38287d1a-b784-4ce9-9650-949d92469519-cloud-credential-operator-serving-cert podName:38287d1a-b784-4ce9-9650-949d92469519 nodeName:}" failed. No retries permitted until 2026-03-08 03:24:09.317829099 +0000 UTC m=+785.712304790 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/38287d1a-b784-4ce9-9650-949d92469519-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-55d85b7b47-9hjss" (UID: "38287d1a-b784-4ce9-9650-949d92469519") : secret "cloud-credential-operator-serving-cert" not found Mar 08 03:24:02.532459 master-0 kubenswrapper[7387]: I0308 03:24:02.532404 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-fb844\" (UID: \"27f5a0ab-3811-4c17-adc1-9ca48ae18ee1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844" Mar 08 03:24:02.532899 master-0 kubenswrapper[7387]: E0308 03:24:02.532643 7387 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Mar 08 03:24:02.532899 master-0 kubenswrapper[7387]: E0308 03:24:02.532776 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-samples-operator-tls podName:27f5a0ab-3811-4c17-adc1-9ca48ae18ee1 nodeName:}" failed. No retries permitted until 2026-03-08 03:24:10.53274183 +0000 UTC m=+786.927217551 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-samples-operator-tls") pod "cluster-samples-operator-664cb58b85-fb844" (UID: "27f5a0ab-3811-4c17-adc1-9ca48ae18ee1") : secret "samples-operator-tls" not found Mar 08 03:24:02.605349 master-0 kubenswrapper[7387]: I0308 03:24:02.605287 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" podStartSLOduration=5.585819551 podStartE2EDuration="7.605265623s" podCreationTimestamp="2026-03-08 03:23:55 +0000 UTC" firstStartedPulling="2026-03-08 03:23:56.38271785 +0000 UTC m=+772.777193541" lastFinishedPulling="2026-03-08 03:23:58.402163932 +0000 UTC m=+774.796639613" observedRunningTime="2026-03-08 03:23:59.346252085 +0000 UTC m=+775.740727766" watchObservedRunningTime="2026-03-08 03:24:02.605265623 +0000 UTC m=+778.999741304" Mar 08 03:24:02.606038 master-0 kubenswrapper[7387]: I0308 03:24:02.606018 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-xv682"] Mar 08 03:24:02.606811 master-0 kubenswrapper[7387]: I0308 03:24:02.606796 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:24:02.610075 master-0 kubenswrapper[7387]: I0308 03:24:02.610021 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-wvdjh" Mar 08 03:24:02.610408 master-0 kubenswrapper[7387]: I0308 03:24:02.610370 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 08 03:24:02.741649 master-0 kubenswrapper[7387]: I0308 03:24:02.741590 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-mcd-auth-proxy-config\") pod \"machine-config-daemon-xv682\" (UID: \"7fafb070-7914-41c2-a8b2-e609a0e5bf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:24:02.741649 master-0 kubenswrapper[7387]: I0308 03:24:02.741644 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-rootfs\") pod \"machine-config-daemon-xv682\" (UID: \"7fafb070-7914-41c2-a8b2-e609a0e5bf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:24:02.741960 master-0 kubenswrapper[7387]: I0308 03:24:02.741925 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rtt8\" (UniqueName: \"kubernetes.io/projected/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-kube-api-access-4rtt8\") pod \"machine-config-daemon-xv682\" (UID: \"7fafb070-7914-41c2-a8b2-e609a0e5bf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:24:02.742114 master-0 kubenswrapper[7387]: I0308 03:24:02.742083 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-proxy-tls\") pod \"machine-config-daemon-xv682\" (UID: \"7fafb070-7914-41c2-a8b2-e609a0e5bf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:24:02.843049 master-0 kubenswrapper[7387]: I0308 03:24:02.842990 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rtt8\" (UniqueName: \"kubernetes.io/projected/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-kube-api-access-4rtt8\") pod \"machine-config-daemon-xv682\" (UID: \"7fafb070-7914-41c2-a8b2-e609a0e5bf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:24:02.843164 master-0 kubenswrapper[7387]: I0308 03:24:02.843095 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-proxy-tls\") pod \"machine-config-daemon-xv682\" (UID: \"7fafb070-7914-41c2-a8b2-e609a0e5bf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:24:02.843222 master-0 kubenswrapper[7387]: I0308 03:24:02.843167 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:24:02.843222 master-0 kubenswrapper[7387]: I0308 03:24:02.843204 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-mcd-auth-proxy-config\") pod \"machine-config-daemon-xv682\" (UID: \"7fafb070-7914-41c2-a8b2-e609a0e5bf9f\") " 
pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:24:02.843305 master-0 kubenswrapper[7387]: I0308 03:24:02.843231 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-rootfs\") pod \"machine-config-daemon-xv682\" (UID: \"7fafb070-7914-41c2-a8b2-e609a0e5bf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:24:02.843389 master-0 kubenswrapper[7387]: I0308 03:24:02.843343 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-rootfs\") pod \"machine-config-daemon-xv682\" (UID: \"7fafb070-7914-41c2-a8b2-e609a0e5bf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:24:02.843755 master-0 kubenswrapper[7387]: E0308 03:24:02.843692 7387 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Mar 08 03:24:02.843755 master-0 kubenswrapper[7387]: E0308 03:24:02.843747 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls podName:8c65557b-9566-49f1-a049-fe492ca201b5 nodeName:}" failed. No retries permitted until 2026-03-08 03:24:06.843732338 +0000 UTC m=+783.238208019 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls") pod "machine-api-operator-84bf6db4f9-5l4t7" (UID: "8c65557b-9566-49f1-a049-fe492ca201b5") : secret "machine-api-operator-tls" not found Mar 08 03:24:02.846443 master-0 kubenswrapper[7387]: I0308 03:24:02.844387 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-mcd-auth-proxy-config\") pod \"machine-config-daemon-xv682\" (UID: \"7fafb070-7914-41c2-a8b2-e609a0e5bf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:24:02.846518 master-0 kubenswrapper[7387]: I0308 03:24:02.846479 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-proxy-tls\") pod \"machine-config-daemon-xv682\" (UID: \"7fafb070-7914-41c2-a8b2-e609a0e5bf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:24:02.861946 master-0 kubenswrapper[7387]: I0308 03:24:02.861884 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rtt8\" (UniqueName: \"kubernetes.io/projected/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-kube-api-access-4rtt8\") pod \"machine-config-daemon-xv682\" (UID: \"7fafb070-7914-41c2-a8b2-e609a0e5bf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:24:02.980955 master-0 kubenswrapper[7387]: I0308 03:24:02.980881 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:24:02.997938 master-0 kubenswrapper[7387]: W0308 03:24:02.997881 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fafb070_7914_41c2_a8b2_e609a0e5bf9f.slice/crio-b66b70c78dec2cc9fda46d55ae86f4ac9d3a2e620b251090c661d75cafe17663 WatchSource:0}: Error finding container b66b70c78dec2cc9fda46d55ae86f4ac9d3a2e620b251090c661d75cafe17663: Status 404 returned error can't find the container with id b66b70c78dec2cc9fda46d55ae86f4ac9d3a2e620b251090c661d75cafe17663 Mar 08 03:24:03.340914 master-0 kubenswrapper[7387]: I0308 03:24:03.339715 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xv682" event={"ID":"7fafb070-7914-41c2-a8b2-e609a0e5bf9f","Type":"ContainerStarted","Data":"1eaeaf0ead54e71b4466bf17b84f875f65ed8003c26141a37a7ba24852facf65"} Mar 08 03:24:03.340914 master-0 kubenswrapper[7387]: I0308 03:24:03.339772 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xv682" event={"ID":"7fafb070-7914-41c2-a8b2-e609a0e5bf9f","Type":"ContainerStarted","Data":"5b7eb839330da40dda7cc37d3d5537476b9d42c7f38d2db78047fd4371885b02"} Mar 08 03:24:03.340914 master-0 kubenswrapper[7387]: I0308 03:24:03.339782 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xv682" event={"ID":"7fafb070-7914-41c2-a8b2-e609a0e5bf9f","Type":"ContainerStarted","Data":"b66b70c78dec2cc9fda46d55ae86f4ac9d3a2e620b251090c661d75cafe17663"} Mar 08 03:24:03.344951 master-0 kubenswrapper[7387]: I0308 03:24:03.343501 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-vw4v4" 
event={"ID":"965f8eef-c5af-499b-b1db-cf63072781cc","Type":"ContainerStarted","Data":"148123547b19a17f13384ac0f521efe52ca11a8ba51861fa9546df274d15fce9"} Mar 08 03:24:03.344951 master-0 kubenswrapper[7387]: I0308 03:24:03.344788 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" event={"ID":"2728b91e-d59a-4e85-b245-0f297e9377f9","Type":"ContainerStarted","Data":"b4185e1d0f2f95c6a9df7b27b993524a8893ce06520676f0b8d760044b63fa25"} Mar 08 03:24:03.346128 master-0 kubenswrapper[7387]: I0308 03:24:03.346088 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" event={"ID":"f650cb41-406a-45e4-996d-3baa7acff8bc","Type":"ContainerStarted","Data":"a6400ef35b31d74c89bd20523e01ccc80f99326efc39c0ad1c2089560f7648b5"} Mar 08 03:24:03.346128 master-0 kubenswrapper[7387]: I0308 03:24:03.346125 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" event={"ID":"f650cb41-406a-45e4-996d-3baa7acff8bc","Type":"ContainerStarted","Data":"baa7f25d9f5a2f070c19a532a1bb8a1def30c62670089a7cee6a2e12e3563794"} Mar 08 03:24:03.348916 master-0 kubenswrapper[7387]: I0308 03:24:03.348866 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-zljww\" (UID: \"c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww" Mar 08 03:24:03.349028 master-0 kubenswrapper[7387]: E0308 03:24:03.348999 7387 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: secret 
"control-plane-machine-set-operator-tls" not found Mar 08 03:24:03.349073 master-0 kubenswrapper[7387]: E0308 03:24:03.349054 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-control-plane-machine-set-operator-tls podName:c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6 nodeName:}" failed. No retries permitted until 2026-03-08 03:24:19.349038588 +0000 UTC m=+795.743514269 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6686554ddc-zljww" (UID: "c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6") : secret "control-plane-machine-set-operator-tls" not found Mar 08 03:24:03.362752 master-0 kubenswrapper[7387]: I0308 03:24:03.362702 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-xv682" podStartSLOduration=1.362686274 podStartE2EDuration="1.362686274s" podCreationTimestamp="2026-03-08 03:24:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:24:03.359757628 +0000 UTC m=+779.754233309" watchObservedRunningTime="2026-03-08 03:24:03.362686274 +0000 UTC m=+779.757161955" Mar 08 03:24:03.380453 master-0 kubenswrapper[7387]: I0308 03:24:03.379138 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-vw4v4" podStartSLOduration=3.189988517 podStartE2EDuration="7.379117653s" podCreationTimestamp="2026-03-08 03:23:56 +0000 UTC" firstStartedPulling="2026-03-08 03:23:58.394765119 +0000 UTC m=+774.789240800" lastFinishedPulling="2026-03-08 03:24:02.583894245 +0000 UTC m=+778.978369936" observedRunningTime="2026-03-08 03:24:03.378873307 +0000 UTC 
m=+779.773348998" watchObservedRunningTime="2026-03-08 03:24:03.379117653 +0000 UTC m=+779.773593334" Mar 08 03:24:03.403672 master-0 kubenswrapper[7387]: I0308 03:24:03.403614 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" podStartSLOduration=2.174399169 podStartE2EDuration="6.403596342s" podCreationTimestamp="2026-03-08 03:23:57 +0000 UTC" firstStartedPulling="2026-03-08 03:23:58.356164681 +0000 UTC m=+774.750640362" lastFinishedPulling="2026-03-08 03:24:02.585361854 +0000 UTC m=+778.979837535" observedRunningTime="2026-03-08 03:24:03.402397011 +0000 UTC m=+779.796872682" watchObservedRunningTime="2026-03-08 03:24:03.403596342 +0000 UTC m=+779.798072013" Mar 08 03:24:04.159154 master-0 kubenswrapper[7387]: I0308 03:24:04.159071 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-cert\") pod \"cluster-autoscaler-operator-69576476f7-jd7rl\" (UID: \"2ffe00fd-6834-4a5b-8b0b-b467d284f23c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" Mar 08 03:24:04.159718 master-0 kubenswrapper[7387]: E0308 03:24:04.159430 7387 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Mar 08 03:24:04.159718 master-0 kubenswrapper[7387]: E0308 03:24:04.159502 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-cert podName:2ffe00fd-6834-4a5b-8b0b-b467d284f23c nodeName:}" failed. No retries permitted until 2026-03-08 03:24:12.159478133 +0000 UTC m=+788.553953854 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-cert") pod "cluster-autoscaler-operator-69576476f7-jd7rl" (UID: "2ffe00fd-6834-4a5b-8b0b-b467d284f23c") : secret "cluster-autoscaler-operator-cert" not found Mar 08 03:24:04.361823 master-0 kubenswrapper[7387]: I0308 03:24:04.361720 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-559568b945-56tkc_f650cb41-406a-45e4-996d-3baa7acff8bc/kube-rbac-proxy/0.log" Mar 08 03:24:04.364042 master-0 kubenswrapper[7387]: I0308 03:24:04.363969 7387 generic.go:334] "Generic (PLEG): container finished" podID="f650cb41-406a-45e4-996d-3baa7acff8bc" containerID="2b4a21f20b5c978db065b0d628b749b773ff1ca19664940c89b9d6bb1db08358" exitCode=1 Mar 08 03:24:04.364350 master-0 kubenswrapper[7387]: I0308 03:24:04.364270 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" event={"ID":"f650cb41-406a-45e4-996d-3baa7acff8bc","Type":"ContainerDied","Data":"2b4a21f20b5c978db065b0d628b749b773ff1ca19664940c89b9d6bb1db08358"} Mar 08 03:24:04.366748 master-0 kubenswrapper[7387]: I0308 03:24:04.366702 7387 scope.go:117] "RemoveContainer" containerID="2b4a21f20b5c978db065b0d628b749b773ff1ca19664940c89b9d6bb1db08358" Mar 08 03:24:05.373408 master-0 kubenswrapper[7387]: I0308 03:24:05.373336 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-559568b945-56tkc_f650cb41-406a-45e4-996d-3baa7acff8bc/kube-rbac-proxy/1.log" Mar 08 03:24:05.374278 master-0 kubenswrapper[7387]: I0308 03:24:05.374247 7387 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-559568b945-56tkc_f650cb41-406a-45e4-996d-3baa7acff8bc/kube-rbac-proxy/0.log" Mar 08 03:24:05.375423 master-0 kubenswrapper[7387]: I0308 03:24:05.375357 7387 generic.go:334] "Generic (PLEG): container finished" podID="f650cb41-406a-45e4-996d-3baa7acff8bc" containerID="f2c7fa4a8051f17f4b1e2230b60f8ce9d6d8e4749e634e85823d1b34d31700aa" exitCode=1 Mar 08 03:24:05.375574 master-0 kubenswrapper[7387]: I0308 03:24:05.375431 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" event={"ID":"f650cb41-406a-45e4-996d-3baa7acff8bc","Type":"ContainerDied","Data":"f2c7fa4a8051f17f4b1e2230b60f8ce9d6d8e4749e634e85823d1b34d31700aa"} Mar 08 03:24:05.375574 master-0 kubenswrapper[7387]: I0308 03:24:05.375488 7387 scope.go:117] "RemoveContainer" containerID="2b4a21f20b5c978db065b0d628b749b773ff1ca19664940c89b9d6bb1db08358" Mar 08 03:24:05.377174 master-0 kubenswrapper[7387]: I0308 03:24:05.376323 7387 scope.go:117] "RemoveContainer" containerID="f2c7fa4a8051f17f4b1e2230b60f8ce9d6d8e4749e634e85823d1b34d31700aa" Mar 08 03:24:05.377174 master-0 kubenswrapper[7387]: E0308 03:24:05.376720 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-559568b945-56tkc_openshift-cloud-controller-manager-operator(f650cb41-406a-45e4-996d-3baa7acff8bc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" podUID="f650cb41-406a-45e4-996d-3baa7acff8bc" Mar 08 03:24:06.386177 master-0 kubenswrapper[7387]: I0308 03:24:06.386096 7387 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-559568b945-56tkc_f650cb41-406a-45e4-996d-3baa7acff8bc/kube-rbac-proxy/1.log" Mar 08 03:24:06.388264 master-0 kubenswrapper[7387]: I0308 03:24:06.388190 7387 scope.go:117] "RemoveContainer" containerID="f2c7fa4a8051f17f4b1e2230b60f8ce9d6d8e4749e634e85823d1b34d31700aa" Mar 08 03:24:06.388563 master-0 kubenswrapper[7387]: E0308 03:24:06.388499 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy pod=cluster-cloud-controller-manager-operator-559568b945-56tkc_openshift-cloud-controller-manager-operator(f650cb41-406a-45e4-996d-3baa7acff8bc)\"" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" podUID="f650cb41-406a-45e4-996d-3baa7acff8bc" Mar 08 03:24:06.942415 master-0 kubenswrapper[7387]: I0308 03:24:06.942355 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/31fa65e4-4348-426c-8f41-150c99ee4d6a-machine-approver-tls\") pod \"machine-approver-955fcfb87-6hrqx\" (UID: \"31fa65e4-4348-426c-8f41-150c99ee4d6a\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx" Mar 08 03:24:06.942835 master-0 kubenswrapper[7387]: I0308 03:24:06.942800 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:24:06.943104 master-0 kubenswrapper[7387]: E0308 03:24:06.942639 7387 secret.go:189] Couldn't get secret 
openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found Mar 08 03:24:06.943208 master-0 kubenswrapper[7387]: E0308 03:24:06.943173 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31fa65e4-4348-426c-8f41-150c99ee4d6a-machine-approver-tls podName:31fa65e4-4348-426c-8f41-150c99ee4d6a nodeName:}" failed. No retries permitted until 2026-03-08 03:24:22.943145073 +0000 UTC m=+799.337620784 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/31fa65e4-4348-426c-8f41-150c99ee4d6a-machine-approver-tls") pod "machine-approver-955fcfb87-6hrqx" (UID: "31fa65e4-4348-426c-8f41-150c99ee4d6a") : secret "machine-approver-tls" not found Mar 08 03:24:06.943289 master-0 kubenswrapper[7387]: E0308 03:24:06.942950 7387 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Mar 08 03:24:06.943489 master-0 kubenswrapper[7387]: E0308 03:24:06.943429 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls podName:8c65557b-9566-49f1-a049-fe492ca201b5 nodeName:}" failed. No retries permitted until 2026-03-08 03:24:14.943372329 +0000 UTC m=+791.337848050 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls") pod "machine-api-operator-84bf6db4f9-5l4t7" (UID: "8c65557b-9566-49f1-a049-fe492ca201b5") : secret "machine-api-operator-tls" not found Mar 08 03:24:07.021541 master-0 kubenswrapper[7387]: I0308 03:24:07.021479 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz"] Mar 08 03:24:07.022582 master-0 kubenswrapper[7387]: I0308 03:24:07.022545 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz" Mar 08 03:24:07.026055 master-0 kubenswrapper[7387]: I0308 03:24:07.026005 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 08 03:24:07.026622 master-0 kubenswrapper[7387]: I0308 03:24:07.026583 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-h5rwm" Mar 08 03:24:07.053847 master-0 kubenswrapper[7387]: I0308 03:24:07.053779 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz"] Mar 08 03:24:07.146602 master-0 kubenswrapper[7387]: I0308 03:24:07.146518 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/42b9f2d1-da5c-46b5-b131-d206fa37d436-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-27kjz\" (UID: \"42b9f2d1-da5c-46b5-b131-d206fa37d436\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz" Mar 08 03:24:07.146951 master-0 kubenswrapper[7387]: I0308 03:24:07.146612 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/42b9f2d1-da5c-46b5-b131-d206fa37d436-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-27kjz\" (UID: \"42b9f2d1-da5c-46b5-b131-d206fa37d436\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz" Mar 08 03:24:07.146951 master-0 kubenswrapper[7387]: I0308 03:24:07.146824 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkckt\" (UniqueName: \"kubernetes.io/projected/42b9f2d1-da5c-46b5-b131-d206fa37d436-kube-api-access-bkckt\") pod \"machine-config-controller-ff46b7bdf-27kjz\" (UID: \"42b9f2d1-da5c-46b5-b131-d206fa37d436\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz" Mar 08 03:24:07.248966 master-0 kubenswrapper[7387]: I0308 03:24:07.248781 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/42b9f2d1-da5c-46b5-b131-d206fa37d436-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-27kjz\" (UID: \"42b9f2d1-da5c-46b5-b131-d206fa37d436\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz" Mar 08 03:24:07.249622 master-0 kubenswrapper[7387]: I0308 03:24:07.249541 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/42b9f2d1-da5c-46b5-b131-d206fa37d436-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-27kjz\" (UID: \"42b9f2d1-da5c-46b5-b131-d206fa37d436\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz" Mar 08 03:24:07.249869 master-0 kubenswrapper[7387]: I0308 03:24:07.249823 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkckt\" (UniqueName: \"kubernetes.io/projected/42b9f2d1-da5c-46b5-b131-d206fa37d436-kube-api-access-bkckt\") pod 
\"machine-config-controller-ff46b7bdf-27kjz\" (UID: \"42b9f2d1-da5c-46b5-b131-d206fa37d436\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz" Mar 08 03:24:07.250752 master-0 kubenswrapper[7387]: I0308 03:24:07.250691 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/42b9f2d1-da5c-46b5-b131-d206fa37d436-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-27kjz\" (UID: \"42b9f2d1-da5c-46b5-b131-d206fa37d436\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz" Mar 08 03:24:07.255765 master-0 kubenswrapper[7387]: I0308 03:24:07.255730 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/42b9f2d1-da5c-46b5-b131-d206fa37d436-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-27kjz\" (UID: \"42b9f2d1-da5c-46b5-b131-d206fa37d436\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz" Mar 08 03:24:07.280147 master-0 kubenswrapper[7387]: I0308 03:24:07.280069 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkckt\" (UniqueName: \"kubernetes.io/projected/42b9f2d1-da5c-46b5-b131-d206fa37d436-kube-api-access-bkckt\") pod \"machine-config-controller-ff46b7bdf-27kjz\" (UID: \"42b9f2d1-da5c-46b5-b131-d206fa37d436\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz" Mar 08 03:24:07.356312 master-0 kubenswrapper[7387]: I0308 03:24:07.356225 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz" Mar 08 03:24:07.809153 master-0 kubenswrapper[7387]: I0308 03:24:07.809033 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz"] Mar 08 03:24:07.820382 master-0 kubenswrapper[7387]: W0308 03:24:07.820323 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42b9f2d1_da5c_46b5_b131_d206fa37d436.slice/crio-3fb6887992993ed2286a2778f2126c5d98e2f2a673949f835554364dd15f2803 WatchSource:0}: Error finding container 3fb6887992993ed2286a2778f2126c5d98e2f2a673949f835554364dd15f2803: Status 404 returned error can't find the container with id 3fb6887992993ed2286a2778f2126c5d98e2f2a673949f835554364dd15f2803 Mar 08 03:24:08.178877 master-0 kubenswrapper[7387]: I0308 03:24:08.178756 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-6bd2j"] Mar 08 03:24:08.179627 master-0 kubenswrapper[7387]: I0308 03:24:08.179578 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-6bd2j" Mar 08 03:24:08.181815 master-0 kubenswrapper[7387]: I0308 03:24:08.181748 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmh2"] Mar 08 03:24:08.196005 master-0 kubenswrapper[7387]: I0308 03:24:08.192152 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmh2" Mar 08 03:24:08.200929 master-0 kubenswrapper[7387]: I0308 03:24:08.197741 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 08 03:24:08.205684 master-0 kubenswrapper[7387]: I0308 03:24:08.205643 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-79f8cd6fdd-tkxj9"] Mar 08 03:24:08.208389 master-0 kubenswrapper[7387]: I0308 03:24:08.206713 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:24:08.211406 master-0 kubenswrapper[7387]: I0308 03:24:08.210517 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-6bd2j"] Mar 08 03:24:08.211406 master-0 kubenswrapper[7387]: I0308 03:24:08.211033 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 08 03:24:08.211406 master-0 kubenswrapper[7387]: I0308 03:24:08.211298 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 08 03:24:08.211406 master-0 kubenswrapper[7387]: I0308 03:24:08.211355 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 08 03:24:08.211695 master-0 kubenswrapper[7387]: I0308 03:24:08.211563 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 08 03:24:08.211695 master-0 kubenswrapper[7387]: I0308 03:24:08.211599 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 08 03:24:08.217591 master-0 kubenswrapper[7387]: I0308 03:24:08.213187 7387 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-certs-default" Mar 08 03:24:08.235063 master-0 kubenswrapper[7387]: I0308 03:24:08.234024 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmh2"] Mar 08 03:24:08.239743 master-0 kubenswrapper[7387]: I0308 03:24:08.239712 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-fhncs"] Mar 08 03:24:08.244680 master-0 kubenswrapper[7387]: I0308 03:24:08.240465 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fhncs" Mar 08 03:24:08.244680 master-0 kubenswrapper[7387]: I0308 03:24:08.241860 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 08 03:24:08.254182 master-0 kubenswrapper[7387]: I0308 03:24:08.245263 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 08 03:24:08.254182 master-0 kubenswrapper[7387]: I0308 03:24:08.245555 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 08 03:24:08.254182 master-0 kubenswrapper[7387]: I0308 03:24:08.245615 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-g676s" Mar 08 03:24:08.254182 master-0 kubenswrapper[7387]: I0308 03:24:08.252252 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fhncs"] Mar 08 03:24:08.269144 master-0 kubenswrapper[7387]: I0308 03:24:08.269095 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-default-certificate\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " 
pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:24:08.269309 master-0 kubenswrapper[7387]: I0308 03:24:08.269152 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6176b631-3911-41cd-beb6-5bc2e924c3a7-cert\") pod \"ingress-canary-fhncs\" (UID: \"6176b631-3911-41cd-beb6-5bc2e924c3a7\") " pod="openshift-ingress-canary/ingress-canary-fhncs" Mar 08 03:24:08.269309 master-0 kubenswrapper[7387]: I0308 03:24:08.269201 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhc2q\" (UniqueName: \"kubernetes.io/projected/c474b370-c291-4662-b57c-a20f77931c1b-kube-api-access-xhc2q\") pod \"network-check-source-7c67b67d47-6bd2j\" (UID: \"c474b370-c291-4662-b57c-a20f77931c1b\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-6bd2j" Mar 08 03:24:08.269309 master-0 kubenswrapper[7387]: I0308 03:24:08.269257 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/8985dac1-38cf-41d1-b7cd-c2bfaf0f6ebc-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-dfmh2\" (UID: \"8985dac1-38cf-41d1-b7cd-c2bfaf0f6ebc\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmh2" Mar 08 03:24:08.269309 master-0 kubenswrapper[7387]: I0308 03:24:08.269284 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxcml\" (UniqueName: \"kubernetes.io/projected/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-kube-api-access-kxcml\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:24:08.269426 master-0 kubenswrapper[7387]: I0308 03:24:08.269316 7387 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-service-ca-bundle\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:24:08.269426 master-0 kubenswrapper[7387]: I0308 03:24:08.269336 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-stats-auth\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:24:08.269426 master-0 kubenswrapper[7387]: I0308 03:24:08.269357 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-metrics-certs\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:24:08.269426 master-0 kubenswrapper[7387]: I0308 03:24:08.269391 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snwdh\" (UniqueName: \"kubernetes.io/projected/6176b631-3911-41cd-beb6-5bc2e924c3a7-kube-api-access-snwdh\") pod \"ingress-canary-fhncs\" (UID: \"6176b631-3911-41cd-beb6-5bc2e924c3a7\") " pod="openshift-ingress-canary/ingress-canary-fhncs" Mar 08 03:24:08.370454 master-0 kubenswrapper[7387]: I0308 03:24:08.370329 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-service-ca-bundle\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " 
pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:24:08.370454 master-0 kubenswrapper[7387]: I0308 03:24:08.370397 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-stats-auth\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:24:08.370454 master-0 kubenswrapper[7387]: I0308 03:24:08.370441 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-metrics-certs\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:24:08.372264 master-0 kubenswrapper[7387]: I0308 03:24:08.372223 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-service-ca-bundle\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:24:08.372336 master-0 kubenswrapper[7387]: I0308 03:24:08.370781 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snwdh\" (UniqueName: \"kubernetes.io/projected/6176b631-3911-41cd-beb6-5bc2e924c3a7-kube-api-access-snwdh\") pod \"ingress-canary-fhncs\" (UID: \"6176b631-3911-41cd-beb6-5bc2e924c3a7\") " pod="openshift-ingress-canary/ingress-canary-fhncs" Mar 08 03:24:08.372449 master-0 kubenswrapper[7387]: I0308 03:24:08.372420 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-default-certificate\") pod 
\"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:24:08.372520 master-0 kubenswrapper[7387]: I0308 03:24:08.372486 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6176b631-3911-41cd-beb6-5bc2e924c3a7-cert\") pod \"ingress-canary-fhncs\" (UID: \"6176b631-3911-41cd-beb6-5bc2e924c3a7\") " pod="openshift-ingress-canary/ingress-canary-fhncs" Mar 08 03:24:08.372603 master-0 kubenswrapper[7387]: I0308 03:24:08.372583 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhc2q\" (UniqueName: \"kubernetes.io/projected/c474b370-c291-4662-b57c-a20f77931c1b-kube-api-access-xhc2q\") pod \"network-check-source-7c67b67d47-6bd2j\" (UID: \"c474b370-c291-4662-b57c-a20f77931c1b\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-6bd2j" Mar 08 03:24:08.372719 master-0 kubenswrapper[7387]: I0308 03:24:08.372693 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/8985dac1-38cf-41d1-b7cd-c2bfaf0f6ebc-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-dfmh2\" (UID: \"8985dac1-38cf-41d1-b7cd-c2bfaf0f6ebc\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmh2" Mar 08 03:24:08.372763 master-0 kubenswrapper[7387]: I0308 03:24:08.372749 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxcml\" (UniqueName: \"kubernetes.io/projected/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-kube-api-access-kxcml\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:24:08.373167 master-0 kubenswrapper[7387]: E0308 03:24:08.373117 7387 secret.go:189] Couldn't get secret 
openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Mar 08 03:24:08.373226 master-0 kubenswrapper[7387]: E0308 03:24:08.373205 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6176b631-3911-41cd-beb6-5bc2e924c3a7-cert podName:6176b631-3911-41cd-beb6-5bc2e924c3a7 nodeName:}" failed. No retries permitted until 2026-03-08 03:24:08.873186071 +0000 UTC m=+785.267661762 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6176b631-3911-41cd-beb6-5bc2e924c3a7-cert") pod "ingress-canary-fhncs" (UID: "6176b631-3911-41cd-beb6-5bc2e924c3a7") : secret "canary-serving-cert" not found Mar 08 03:24:08.375306 master-0 kubenswrapper[7387]: I0308 03:24:08.375274 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-metrics-certs\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:24:08.375506 master-0 kubenswrapper[7387]: I0308 03:24:08.375437 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-stats-auth\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:24:08.378984 master-0 kubenswrapper[7387]: I0308 03:24:08.378943 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/8985dac1-38cf-41d1-b7cd-c2bfaf0f6ebc-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-dfmh2\" (UID: \"8985dac1-38cf-41d1-b7cd-c2bfaf0f6ebc\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmh2" Mar 08 03:24:08.379826 master-0 
kubenswrapper[7387]: I0308 03:24:08.379779 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-default-certificate\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:24:08.394242 master-0 kubenswrapper[7387]: I0308 03:24:08.394191 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snwdh\" (UniqueName: \"kubernetes.io/projected/6176b631-3911-41cd-beb6-5bc2e924c3a7-kube-api-access-snwdh\") pod \"ingress-canary-fhncs\" (UID: \"6176b631-3911-41cd-beb6-5bc2e924c3a7\") " pod="openshift-ingress-canary/ingress-canary-fhncs" Mar 08 03:24:08.398270 master-0 kubenswrapper[7387]: I0308 03:24:08.398219 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxcml\" (UniqueName: \"kubernetes.io/projected/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-kube-api-access-kxcml\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:24:08.400030 master-0 kubenswrapper[7387]: I0308 03:24:08.399976 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhc2q\" (UniqueName: \"kubernetes.io/projected/c474b370-c291-4662-b57c-a20f77931c1b-kube-api-access-xhc2q\") pod \"network-check-source-7c67b67d47-6bd2j\" (UID: \"c474b370-c291-4662-b57c-a20f77931c1b\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-6bd2j" Mar 08 03:24:08.410742 master-0 kubenswrapper[7387]: I0308 03:24:08.410675 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz" 
event={"ID":"42b9f2d1-da5c-46b5-b131-d206fa37d436","Type":"ContainerStarted","Data":"b973f9705fdbb130a085c8a29ccc76182c1570cf682f9b040abafc9dfa718ba4"} Mar 08 03:24:08.410742 master-0 kubenswrapper[7387]: I0308 03:24:08.410728 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz" event={"ID":"42b9f2d1-da5c-46b5-b131-d206fa37d436","Type":"ContainerStarted","Data":"9ebffe5493b09d3a093aa85180c37071c3a0b4e8c5ef6f4c98982166c5ae432d"} Mar 08 03:24:08.410742 master-0 kubenswrapper[7387]: I0308 03:24:08.410742 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz" event={"ID":"42b9f2d1-da5c-46b5-b131-d206fa37d436","Type":"ContainerStarted","Data":"3fb6887992993ed2286a2778f2126c5d98e2f2a673949f835554364dd15f2803"} Mar 08 03:24:08.443926 master-0 kubenswrapper[7387]: I0308 03:24:08.443828 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz" podStartSLOduration=2.443805925 podStartE2EDuration="2.443805925s" podCreationTimestamp="2026-03-08 03:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:24:08.440657472 +0000 UTC m=+784.835133173" watchObservedRunningTime="2026-03-08 03:24:08.443805925 +0000 UTC m=+784.838281616" Mar 08 03:24:08.547003 master-0 kubenswrapper[7387]: I0308 03:24:08.546938 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-6bd2j" Mar 08 03:24:08.571660 master-0 kubenswrapper[7387]: I0308 03:24:08.571614 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmh2" Mar 08 03:24:08.596239 master-0 kubenswrapper[7387]: I0308 03:24:08.596163 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:24:08.616400 master-0 kubenswrapper[7387]: W0308 03:24:08.614322 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode878dbfe_0ef8_4ee1_a8b9_3bea56ec449d.slice/crio-f7ce1d7e36af0a8d1a304742efe774e5b42b51a042e077bc8da8bd1a942eda38 WatchSource:0}: Error finding container f7ce1d7e36af0a8d1a304742efe774e5b42b51a042e077bc8da8bd1a942eda38: Status 404 returned error can't find the container with id f7ce1d7e36af0a8d1a304742efe774e5b42b51a042e077bc8da8bd1a942eda38 Mar 08 03:24:08.883390 master-0 kubenswrapper[7387]: I0308 03:24:08.883332 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6176b631-3911-41cd-beb6-5bc2e924c3a7-cert\") pod \"ingress-canary-fhncs\" (UID: \"6176b631-3911-41cd-beb6-5bc2e924c3a7\") " pod="openshift-ingress-canary/ingress-canary-fhncs" Mar 08 03:24:08.884530 master-0 kubenswrapper[7387]: E0308 03:24:08.883599 7387 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Mar 08 03:24:08.884530 master-0 kubenswrapper[7387]: E0308 03:24:08.883714 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6176b631-3911-41cd-beb6-5bc2e924c3a7-cert podName:6176b631-3911-41cd-beb6-5bc2e924c3a7 nodeName:}" failed. No retries permitted until 2026-03-08 03:24:09.883655286 +0000 UTC m=+786.278130987 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6176b631-3911-41cd-beb6-5bc2e924c3a7-cert") pod "ingress-canary-fhncs" (UID: "6176b631-3911-41cd-beb6-5bc2e924c3a7") : secret "canary-serving-cert" not found Mar 08 03:24:08.993677 master-0 kubenswrapper[7387]: W0308 03:24:08.993608 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc474b370_c291_4662_b57c_a20f77931c1b.slice/crio-90d6dd3478d5a96b9991ca2dea6f7e3c092c924b63627e5a5258e2d1cefa9467 WatchSource:0}: Error finding container 90d6dd3478d5a96b9991ca2dea6f7e3c092c924b63627e5a5258e2d1cefa9467: Status 404 returned error can't find the container with id 90d6dd3478d5a96b9991ca2dea6f7e3c092c924b63627e5a5258e2d1cefa9467 Mar 08 03:24:08.997461 master-0 kubenswrapper[7387]: I0308 03:24:08.997372 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-6bd2j"] Mar 08 03:24:09.056634 master-0 kubenswrapper[7387]: I0308 03:24:09.056575 7387 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 08 03:24:09.074068 master-0 kubenswrapper[7387]: I0308 03:24:09.073999 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmh2"] Mar 08 03:24:09.084644 master-0 kubenswrapper[7387]: W0308 03:24:09.084543 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8985dac1_38cf_41d1_b7cd_c2bfaf0f6ebc.slice/crio-f47ce532692381e3555ceaa331dea07e3ba8f75b7ab217af49fad07906bb6714 WatchSource:0}: Error finding container f47ce532692381e3555ceaa331dea07e3ba8f75b7ab217af49fad07906bb6714: Status 404 returned error can't find the container with id f47ce532692381e3555ceaa331dea07e3ba8f75b7ab217af49fad07906bb6714 Mar 08 03:24:09.404950 master-0 
kubenswrapper[7387]: I0308 03:24:09.399795 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/38287d1a-b784-4ce9-9650-949d92469519-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-9hjss\" (UID: \"38287d1a-b784-4ce9-9650-949d92469519\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss" Mar 08 03:24:09.404950 master-0 kubenswrapper[7387]: E0308 03:24:09.400227 7387 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: secret "cloud-credential-operator-serving-cert" not found Mar 08 03:24:09.404950 master-0 kubenswrapper[7387]: E0308 03:24:09.400313 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38287d1a-b784-4ce9-9650-949d92469519-cloud-credential-operator-serving-cert podName:38287d1a-b784-4ce9-9650-949d92469519 nodeName:}" failed. No retries permitted until 2026-03-08 03:24:25.400292051 +0000 UTC m=+801.794767742 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/38287d1a-b784-4ce9-9650-949d92469519-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-55d85b7b47-9hjss" (UID: "38287d1a-b784-4ce9-9650-949d92469519") : secret "cloud-credential-operator-serving-cert" not found Mar 08 03:24:09.417804 master-0 kubenswrapper[7387]: I0308 03:24:09.417742 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" event={"ID":"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d","Type":"ContainerStarted","Data":"f7ce1d7e36af0a8d1a304742efe774e5b42b51a042e077bc8da8bd1a942eda38"} Mar 08 03:24:09.419205 master-0 kubenswrapper[7387]: I0308 03:24:09.419157 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-6bd2j" event={"ID":"c474b370-c291-4662-b57c-a20f77931c1b","Type":"ContainerStarted","Data":"42af2338e0af46524b24589f1950a511fdb57e6cd05cdb03bb40b75721fcb0f4"} Mar 08 03:24:09.419261 master-0 kubenswrapper[7387]: I0308 03:24:09.419212 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-6bd2j" event={"ID":"c474b370-c291-4662-b57c-a20f77931c1b","Type":"ContainerStarted","Data":"90d6dd3478d5a96b9991ca2dea6f7e3c092c924b63627e5a5258e2d1cefa9467"} Mar 08 03:24:09.422584 master-0 kubenswrapper[7387]: I0308 03:24:09.422538 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmh2" event={"ID":"8985dac1-38cf-41d1-b7cd-c2bfaf0f6ebc","Type":"ContainerStarted","Data":"f47ce532692381e3555ceaa331dea07e3ba8f75b7ab217af49fad07906bb6714"} Mar 08 03:24:09.445942 master-0 kubenswrapper[7387]: I0308 03:24:09.444895 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-6bd2j" 
podStartSLOduration=835.444870764 podStartE2EDuration="13m55.444870764s" podCreationTimestamp="2026-03-08 03:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:24:09.442373089 +0000 UTC m=+785.836848770" watchObservedRunningTime="2026-03-08 03:24:09.444870764 +0000 UTC m=+785.839346455" Mar 08 03:24:09.909955 master-0 kubenswrapper[7387]: I0308 03:24:09.906635 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6176b631-3911-41cd-beb6-5bc2e924c3a7-cert\") pod \"ingress-canary-fhncs\" (UID: \"6176b631-3911-41cd-beb6-5bc2e924c3a7\") " pod="openshift-ingress-canary/ingress-canary-fhncs" Mar 08 03:24:09.909955 master-0 kubenswrapper[7387]: E0308 03:24:09.906817 7387 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Mar 08 03:24:09.909955 master-0 kubenswrapper[7387]: E0308 03:24:09.906869 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6176b631-3911-41cd-beb6-5bc2e924c3a7-cert podName:6176b631-3911-41cd-beb6-5bc2e924c3a7 nodeName:}" failed. No retries permitted until 2026-03-08 03:24:11.906850983 +0000 UTC m=+788.301326664 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6176b631-3911-41cd-beb6-5bc2e924c3a7-cert") pod "ingress-canary-fhncs" (UID: "6176b631-3911-41cd-beb6-5bc2e924c3a7") : secret "canary-serving-cert" not found Mar 08 03:24:10.616572 master-0 kubenswrapper[7387]: I0308 03:24:10.616507 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-fb844\" (UID: \"27f5a0ab-3811-4c17-adc1-9ca48ae18ee1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844" Mar 08 03:24:10.616865 master-0 kubenswrapper[7387]: E0308 03:24:10.616719 7387 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: secret "samples-operator-tls" not found Mar 08 03:24:10.616865 master-0 kubenswrapper[7387]: E0308 03:24:10.616810 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-samples-operator-tls podName:27f5a0ab-3811-4c17-adc1-9ca48ae18ee1 nodeName:}" failed. No retries permitted until 2026-03-08 03:24:26.616787134 +0000 UTC m=+803.011262825 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-samples-operator-tls") pod "cluster-samples-operator-664cb58b85-fb844" (UID: "27f5a0ab-3811-4c17-adc1-9ca48ae18ee1") : secret "samples-operator-tls" not found Mar 08 03:24:11.436567 master-0 kubenswrapper[7387]: I0308 03:24:11.436452 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmh2" event={"ID":"8985dac1-38cf-41d1-b7cd-c2bfaf0f6ebc","Type":"ContainerStarted","Data":"807aa532b1b6c906b7d675d78e7181ee367bf27598e44840bea989fd020ad93d"} Mar 08 03:24:11.437129 master-0 kubenswrapper[7387]: I0308 03:24:11.436735 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmh2" Mar 08 03:24:11.439302 master-0 kubenswrapper[7387]: I0308 03:24:11.439250 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" event={"ID":"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d","Type":"ContainerStarted","Data":"7fa04e21a63adad667dc50ba88735d25193a1b6333668c5723070e6f990fccc3"} Mar 08 03:24:11.443129 master-0 kubenswrapper[7387]: I0308 03:24:11.443053 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmh2" Mar 08 03:24:11.466344 master-0 kubenswrapper[7387]: I0308 03:24:11.466221 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmh2" podStartSLOduration=747.470387189 podStartE2EDuration="12m29.466196286s" podCreationTimestamp="2026-03-08 03:11:42 +0000 UTC" firstStartedPulling="2026-03-08 03:24:09.097711002 +0000 UTC m=+785.492186703" lastFinishedPulling="2026-03-08 03:24:11.093520119 +0000 UTC m=+787.487995800" 
observedRunningTime="2026-03-08 03:24:11.459261235 +0000 UTC m=+787.853736946" watchObservedRunningTime="2026-03-08 03:24:11.466196286 +0000 UTC m=+787.860671997" Mar 08 03:24:11.466938 master-0 kubenswrapper[7387]: I0308 03:24:11.466869 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-fstmq"] Mar 08 03:24:11.468208 master-0 kubenswrapper[7387]: I0308 03:24:11.468158 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fstmq" Mar 08 03:24:11.470207 master-0 kubenswrapper[7387]: I0308 03:24:11.470156 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 08 03:24:11.470461 master-0 kubenswrapper[7387]: I0308 03:24:11.470424 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-lf8gs" Mar 08 03:24:11.470669 master-0 kubenswrapper[7387]: I0308 03:24:11.470634 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 08 03:24:11.492593 master-0 kubenswrapper[7387]: I0308 03:24:11.492493 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podStartSLOduration=760.025014646 podStartE2EDuration="12m42.492466172s" podCreationTimestamp="2026-03-08 03:11:29 +0000 UTC" firstStartedPulling="2026-03-08 03:24:08.616848702 +0000 UTC m=+785.011324413" lastFinishedPulling="2026-03-08 03:24:11.084300258 +0000 UTC m=+787.478775939" observedRunningTime="2026-03-08 03:24:11.487847772 +0000 UTC m=+787.882323483" watchObservedRunningTime="2026-03-08 03:24:11.492466172 +0000 UTC m=+787.886941883" Mar 08 03:24:11.534486 master-0 kubenswrapper[7387]: I0308 03:24:11.534405 7387 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/99923acc-a1b4-4fbc-a636-f9c145856b01-certs\") pod \"machine-config-server-fstmq\" (UID: \"99923acc-a1b4-4fbc-a636-f9c145856b01\") " pod="openshift-machine-config-operator/machine-config-server-fstmq" Mar 08 03:24:11.534655 master-0 kubenswrapper[7387]: I0308 03:24:11.534532 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/99923acc-a1b4-4fbc-a636-f9c145856b01-node-bootstrap-token\") pod \"machine-config-server-fstmq\" (UID: \"99923acc-a1b4-4fbc-a636-f9c145856b01\") " pod="openshift-machine-config-operator/machine-config-server-fstmq" Mar 08 03:24:11.534655 master-0 kubenswrapper[7387]: I0308 03:24:11.534565 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfdpq\" (UniqueName: \"kubernetes.io/projected/99923acc-a1b4-4fbc-a636-f9c145856b01-kube-api-access-tfdpq\") pod \"machine-config-server-fstmq\" (UID: \"99923acc-a1b4-4fbc-a636-f9c145856b01\") " pod="openshift-machine-config-operator/machine-config-server-fstmq" Mar 08 03:24:11.597564 master-0 kubenswrapper[7387]: I0308 03:24:11.597513 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:24:11.599047 master-0 kubenswrapper[7387]: I0308 03:24:11.599018 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:11.599047 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:11.599047 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:11.599047 master-0 kubenswrapper[7387]: healthz check failed Mar 
08 03:24:11.599182 master-0 kubenswrapper[7387]: I0308 03:24:11.599057 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:11.635411 master-0 kubenswrapper[7387]: I0308 03:24:11.635363 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/99923acc-a1b4-4fbc-a636-f9c145856b01-certs\") pod \"machine-config-server-fstmq\" (UID: \"99923acc-a1b4-4fbc-a636-f9c145856b01\") " pod="openshift-machine-config-operator/machine-config-server-fstmq" Mar 08 03:24:11.635411 master-0 kubenswrapper[7387]: I0308 03:24:11.635425 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/99923acc-a1b4-4fbc-a636-f9c145856b01-node-bootstrap-token\") pod \"machine-config-server-fstmq\" (UID: \"99923acc-a1b4-4fbc-a636-f9c145856b01\") " pod="openshift-machine-config-operator/machine-config-server-fstmq" Mar 08 03:24:11.635662 master-0 kubenswrapper[7387]: I0308 03:24:11.635443 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfdpq\" (UniqueName: \"kubernetes.io/projected/99923acc-a1b4-4fbc-a636-f9c145856b01-kube-api-access-tfdpq\") pod \"machine-config-server-fstmq\" (UID: \"99923acc-a1b4-4fbc-a636-f9c145856b01\") " pod="openshift-machine-config-operator/machine-config-server-fstmq" Mar 08 03:24:11.638851 master-0 kubenswrapper[7387]: I0308 03:24:11.638814 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/99923acc-a1b4-4fbc-a636-f9c145856b01-certs\") pod \"machine-config-server-fstmq\" (UID: \"99923acc-a1b4-4fbc-a636-f9c145856b01\") " pod="openshift-machine-config-operator/machine-config-server-fstmq" Mar 08 
03:24:11.640402 master-0 kubenswrapper[7387]: I0308 03:24:11.640385 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/99923acc-a1b4-4fbc-a636-f9c145856b01-node-bootstrap-token\") pod \"machine-config-server-fstmq\" (UID: \"99923acc-a1b4-4fbc-a636-f9c145856b01\") " pod="openshift-machine-config-operator/machine-config-server-fstmq" Mar 08 03:24:11.651406 master-0 kubenswrapper[7387]: I0308 03:24:11.651371 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfdpq\" (UniqueName: \"kubernetes.io/projected/99923acc-a1b4-4fbc-a636-f9c145856b01-kube-api-access-tfdpq\") pod \"machine-config-server-fstmq\" (UID: \"99923acc-a1b4-4fbc-a636-f9c145856b01\") " pod="openshift-machine-config-operator/machine-config-server-fstmq" Mar 08 03:24:11.745278 master-0 kubenswrapper[7387]: I0308 03:24:11.745181 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx"] Mar 08 03:24:11.746280 master-0 kubenswrapper[7387]: I0308 03:24:11.746263 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" Mar 08 03:24:11.749153 master-0 kubenswrapper[7387]: I0308 03:24:11.749115 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 08 03:24:11.749245 master-0 kubenswrapper[7387]: I0308 03:24:11.749154 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 08 03:24:11.749771 master-0 kubenswrapper[7387]: I0308 03:24:11.749748 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 08 03:24:11.750089 master-0 kubenswrapper[7387]: I0308 03:24:11.750057 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-278m6" Mar 08 03:24:11.782876 master-0 kubenswrapper[7387]: I0308 03:24:11.782821 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx"] Mar 08 03:24:11.783892 master-0 kubenswrapper[7387]: I0308 03:24:11.783849 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fstmq" Mar 08 03:24:11.838592 master-0 kubenswrapper[7387]: I0308 03:24:11.838518 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" Mar 08 03:24:11.838592 master-0 kubenswrapper[7387]: I0308 03:24:11.838571 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae8f3a1e-689b-4107-993a-dde67f4decf2-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" Mar 08 03:24:11.838773 master-0 kubenswrapper[7387]: I0308 03:24:11.838714 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" Mar 08 03:24:11.838773 master-0 kubenswrapper[7387]: I0308 03:24:11.838748 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctdbq\" (UniqueName: \"kubernetes.io/projected/ae8f3a1e-689b-4107-993a-dde67f4decf2-kube-api-access-ctdbq\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" Mar 08 03:24:11.940930 
master-0 kubenswrapper[7387]: I0308 03:24:11.940087 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" Mar 08 03:24:11.940930 master-0 kubenswrapper[7387]: I0308 03:24:11.940152 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae8f3a1e-689b-4107-993a-dde67f4decf2-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" Mar 08 03:24:11.940930 master-0 kubenswrapper[7387]: I0308 03:24:11.940195 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6176b631-3911-41cd-beb6-5bc2e924c3a7-cert\") pod \"ingress-canary-fhncs\" (UID: \"6176b631-3911-41cd-beb6-5bc2e924c3a7\") " pod="openshift-ingress-canary/ingress-canary-fhncs" Mar 08 03:24:11.940930 master-0 kubenswrapper[7387]: I0308 03:24:11.940292 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" Mar 08 03:24:11.940930 master-0 kubenswrapper[7387]: I0308 03:24:11.940318 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctdbq\" (UniqueName: \"kubernetes.io/projected/ae8f3a1e-689b-4107-993a-dde67f4decf2-kube-api-access-ctdbq\") pod 
\"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" Mar 08 03:24:11.940930 master-0 kubenswrapper[7387]: E0308 03:24:11.940705 7387 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Mar 08 03:24:11.940930 master-0 kubenswrapper[7387]: E0308 03:24:11.940752 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-tls podName:ae8f3a1e-689b-4107-993a-dde67f4decf2 nodeName:}" failed. No retries permitted until 2026-03-08 03:24:12.440735863 +0000 UTC m=+788.835211554 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-lkwmx" (UID: "ae8f3a1e-689b-4107-993a-dde67f4decf2") : secret "prometheus-operator-tls" not found Mar 08 03:24:11.942552 master-0 kubenswrapper[7387]: I0308 03:24:11.941818 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae8f3a1e-689b-4107-993a-dde67f4decf2-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" Mar 08 03:24:11.942552 master-0 kubenswrapper[7387]: E0308 03:24:11.941922 7387 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Mar 08 03:24:11.942552 master-0 kubenswrapper[7387]: E0308 03:24:11.941957 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6176b631-3911-41cd-beb6-5bc2e924c3a7-cert podName:6176b631-3911-41cd-beb6-5bc2e924c3a7 nodeName:}" failed. 
No retries permitted until 2026-03-08 03:24:15.941945875 +0000 UTC m=+792.336421566 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6176b631-3911-41cd-beb6-5bc2e924c3a7-cert") pod "ingress-canary-fhncs" (UID: "6176b631-3911-41cd-beb6-5bc2e924c3a7") : secret "canary-serving-cert" not found Mar 08 03:24:11.944963 master-0 kubenswrapper[7387]: I0308 03:24:11.944893 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" Mar 08 03:24:11.958229 master-0 kubenswrapper[7387]: I0308 03:24:11.958184 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctdbq\" (UniqueName: \"kubernetes.io/projected/ae8f3a1e-689b-4107-993a-dde67f4decf2-kube-api-access-ctdbq\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" Mar 08 03:24:12.198989 master-0 kubenswrapper[7387]: I0308 03:24:12.198883 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx"] Mar 08 03:24:12.200047 master-0 kubenswrapper[7387]: E0308 03:24:12.200002 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[machine-approver-tls], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx" podUID="31fa65e4-4348-426c-8f41-150c99ee4d6a" Mar 08 03:24:12.244287 master-0 kubenswrapper[7387]: I0308 03:24:12.244212 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-cert\") pod \"cluster-autoscaler-operator-69576476f7-jd7rl\" (UID: \"2ffe00fd-6834-4a5b-8b0b-b467d284f23c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" Mar 08 03:24:12.244519 master-0 kubenswrapper[7387]: E0308 03:24:12.244463 7387 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: secret "cluster-autoscaler-operator-cert" not found Mar 08 03:24:12.244592 master-0 kubenswrapper[7387]: E0308 03:24:12.244570 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-cert podName:2ffe00fd-6834-4a5b-8b0b-b467d284f23c nodeName:}" failed. No retries permitted until 2026-03-08 03:24:28.244548194 +0000 UTC m=+804.639023885 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-cert") pod "cluster-autoscaler-operator-69576476f7-jd7rl" (UID: "2ffe00fd-6834-4a5b-8b0b-b467d284f23c") : secret "cluster-autoscaler-operator-cert" not found Mar 08 03:24:12.447336 master-0 kubenswrapper[7387]: I0308 03:24:12.447242 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx" Mar 08 03:24:12.448131 master-0 kubenswrapper[7387]: I0308 03:24:12.447369 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" Mar 08 03:24:12.448131 master-0 kubenswrapper[7387]: I0308 03:24:12.447390 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fstmq" event={"ID":"99923acc-a1b4-4fbc-a636-f9c145856b01","Type":"ContainerStarted","Data":"e0426b70108c5c7e359d94197a6d936e2d6c71a0c24ce080f9be7cc29ba9f731"} Mar 08 03:24:12.448131 master-0 kubenswrapper[7387]: I0308 03:24:12.447445 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fstmq" event={"ID":"99923acc-a1b4-4fbc-a636-f9c145856b01","Type":"ContainerStarted","Data":"f30b40b5dee25f4cfef68deaa81953cc276010f2fb26052242518f7b573301d1"} Mar 08 03:24:12.448131 master-0 kubenswrapper[7387]: E0308 03:24:12.447725 7387 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Mar 08 03:24:12.448131 master-0 kubenswrapper[7387]: E0308 03:24:12.447809 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-tls podName:ae8f3a1e-689b-4107-993a-dde67f4decf2 nodeName:}" failed. No retries permitted until 2026-03-08 03:24:13.447785029 +0000 UTC m=+789.842260750 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-lkwmx" (UID: "ae8f3a1e-689b-4107-993a-dde67f4decf2") : secret "prometheus-operator-tls" not found Mar 08 03:24:12.456467 master-0 kubenswrapper[7387]: I0308 03:24:12.456350 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx" Mar 08 03:24:12.473961 master-0 kubenswrapper[7387]: I0308 03:24:12.473818 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-fstmq" podStartSLOduration=1.473794438 podStartE2EDuration="1.473794438s" podCreationTimestamp="2026-03-08 03:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:24:12.466789545 +0000 UTC m=+788.861265266" watchObservedRunningTime="2026-03-08 03:24:12.473794438 +0000 UTC m=+788.868270149" Mar 08 03:24:12.548434 master-0 kubenswrapper[7387]: I0308 03:24:12.548329 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prxhl\" (UniqueName: \"kubernetes.io/projected/31fa65e4-4348-426c-8f41-150c99ee4d6a-kube-api-access-prxhl\") pod \"31fa65e4-4348-426c-8f41-150c99ee4d6a\" (UID: \"31fa65e4-4348-426c-8f41-150c99ee4d6a\") " Mar 08 03:24:12.548434 master-0 kubenswrapper[7387]: I0308 03:24:12.548433 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31fa65e4-4348-426c-8f41-150c99ee4d6a-config\") pod \"31fa65e4-4348-426c-8f41-150c99ee4d6a\" (UID: \"31fa65e4-4348-426c-8f41-150c99ee4d6a\") " Mar 08 03:24:12.549116 master-0 kubenswrapper[7387]: I0308 03:24:12.549052 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31fa65e4-4348-426c-8f41-150c99ee4d6a-auth-proxy-config\") pod \"31fa65e4-4348-426c-8f41-150c99ee4d6a\" (UID: \"31fa65e4-4348-426c-8f41-150c99ee4d6a\") " Mar 08 03:24:12.549871 master-0 kubenswrapper[7387]: I0308 03:24:12.549232 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31fa65e4-4348-426c-8f41-150c99ee4d6a-config" (OuterVolumeSpecName: "config") pod "31fa65e4-4348-426c-8f41-150c99ee4d6a" (UID: "31fa65e4-4348-426c-8f41-150c99ee4d6a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:24:12.550414 master-0 kubenswrapper[7387]: I0308 03:24:12.550351 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31fa65e4-4348-426c-8f41-150c99ee4d6a-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31fa65e4-4348-426c-8f41-150c99ee4d6a" (UID: "31fa65e4-4348-426c-8f41-150c99ee4d6a"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:24:12.550719 master-0 kubenswrapper[7387]: I0308 03:24:12.550684 7387 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31fa65e4-4348-426c-8f41-150c99ee4d6a-config\") on node \"master-0\" DevicePath \"\"" Mar 08 03:24:12.553468 master-0 kubenswrapper[7387]: I0308 03:24:12.553399 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa65e4-4348-426c-8f41-150c99ee4d6a-kube-api-access-prxhl" (OuterVolumeSpecName: "kube-api-access-prxhl") pod "31fa65e4-4348-426c-8f41-150c99ee4d6a" (UID: "31fa65e4-4348-426c-8f41-150c99ee4d6a"). InnerVolumeSpecName "kube-api-access-prxhl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:24:12.599961 master-0 kubenswrapper[7387]: I0308 03:24:12.599851 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:12.599961 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:12.599961 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:12.599961 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:12.600418 master-0 kubenswrapper[7387]: I0308 03:24:12.599973 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:12.652132 master-0 kubenswrapper[7387]: I0308 03:24:12.652043 7387 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31fa65e4-4348-426c-8f41-150c99ee4d6a-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 08 03:24:12.652132 master-0 kubenswrapper[7387]: I0308 03:24:12.652107 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prxhl\" (UniqueName: \"kubernetes.io/projected/31fa65e4-4348-426c-8f41-150c99ee4d6a-kube-api-access-prxhl\") on node \"master-0\" DevicePath \"\"" Mar 08 03:24:12.811815 master-0 kubenswrapper[7387]: I0308 03:24:12.811729 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-k8xgg_90ef7c0a-7c6f-45aa-865d-1e247110b265/authentication-operator/2.log" Mar 08 03:24:13.014873 master-0 kubenswrapper[7387]: I0308 03:24:13.014822 7387 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-k8xgg_90ef7c0a-7c6f-45aa-865d-1e247110b265/authentication-operator/3.log"
Mar 08 03:24:13.210407 master-0 kubenswrapper[7387]: I0308 03:24:13.210194 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-79f8cd6fdd-tkxj9_e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d/router/0.log"
Mar 08 03:24:13.405791 master-0 kubenswrapper[7387]: I0308 03:24:13.405746 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-7b545788fb-82rjl_3a2a141d-a4c3-4b6c-a90b-d184f61a14dd/fix-audit-permissions/0.log"
Mar 08 03:24:13.451155 master-0 kubenswrapper[7387]: I0308 03:24:13.451094 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx"
Mar 08 03:24:13.464823 master-0 kubenswrapper[7387]: I0308 03:24:13.464745 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx"
Mar 08 03:24:13.465249 master-0 kubenswrapper[7387]: E0308 03:24:13.465205 7387 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Mar 08 03:24:13.465349 master-0 kubenswrapper[7387]: E0308 03:24:13.465300 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-tls podName:ae8f3a1e-689b-4107-993a-dde67f4decf2 nodeName:}" failed. No retries permitted until 2026-03-08 03:24:15.465273767 +0000 UTC m=+791.859749528 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-lkwmx" (UID: "ae8f3a1e-689b-4107-993a-dde67f4decf2") : secret "prometheus-operator-tls" not found
Mar 08 03:24:13.494803 master-0 kubenswrapper[7387]: I0308 03:24:13.494731 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx"]
Mar 08 03:24:13.500164 master-0 kubenswrapper[7387]: I0308 03:24:13.500106 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-6hrqx"]
Mar 08 03:24:13.522570 master-0 kubenswrapper[7387]: I0308 03:24:13.522502 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws"]
Mar 08 03:24:13.523470 master-0 kubenswrapper[7387]: I0308 03:24:13.523443 7387 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws"
Mar 08 03:24:13.525448 master-0 kubenswrapper[7387]: I0308 03:24:13.525416 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 08 03:24:13.525513 master-0 kubenswrapper[7387]: I0308 03:24:13.525443 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 08 03:24:13.525642 master-0 kubenswrapper[7387]: I0308 03:24:13.525624 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 08 03:24:13.525887 master-0 kubenswrapper[7387]: I0308 03:24:13.525858 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-vbs7r"
Mar 08 03:24:13.526144 master-0 kubenswrapper[7387]: I0308 03:24:13.526077 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 08 03:24:13.530296 master-0 kubenswrapper[7387]: I0308 03:24:13.530261 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 08 03:24:13.565357 master-0 kubenswrapper[7387]: I0308 03:24:13.565299 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b537a655-ef73-40b5-b228-95ab6cfdedf2-config\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws"
Mar 08 03:24:13.565544 master-0 kubenswrapper[7387]: I0308 03:24:13.565377 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName:
\"kubernetes.io/secret/b537a655-ef73-40b5-b228-95ab6cfdedf2-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws"
Mar 08 03:24:13.565627 master-0 kubenswrapper[7387]: I0308 03:24:13.565588 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b537a655-ef73-40b5-b228-95ab6cfdedf2-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws"
Mar 08 03:24:13.565696 master-0 kubenswrapper[7387]: I0308 03:24:13.565679 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4t2j\" (UniqueName: \"kubernetes.io/projected/b537a655-ef73-40b5-b228-95ab6cfdedf2-kube-api-access-d4t2j\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws"
Mar 08 03:24:13.565820 master-0 kubenswrapper[7387]: I0308 03:24:13.565797 7387 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/31fa65e4-4348-426c-8f41-150c99ee4d6a-machine-approver-tls\") on node \"master-0\" DevicePath \"\""
Mar 08 03:24:13.599559 master-0 kubenswrapper[7387]: I0308 03:24:13.599513 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:24:13.599559 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:24:13.599559 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:24:13.599559 master-0
kubenswrapper[7387]: healthz check failed
Mar 08 03:24:13.599829 master-0 kubenswrapper[7387]: I0308 03:24:13.599581 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:24:13.608916 master-0 kubenswrapper[7387]: I0308 03:24:13.608832 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-7b545788fb-82rjl_3a2a141d-a4c3-4b6c-a90b-d184f61a14dd/oauth-apiserver/0.log"
Mar 08 03:24:13.666746 master-0 kubenswrapper[7387]: I0308 03:24:13.666683 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4t2j\" (UniqueName: \"kubernetes.io/projected/b537a655-ef73-40b5-b228-95ab6cfdedf2-kube-api-access-d4t2j\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws"
Mar 08 03:24:13.666970 master-0 kubenswrapper[7387]: I0308 03:24:13.666777 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b537a655-ef73-40b5-b228-95ab6cfdedf2-config\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws"
Mar 08 03:24:13.666970 master-0 kubenswrapper[7387]: I0308 03:24:13.666858 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b537a655-ef73-40b5-b228-95ab6cfdedf2-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws"
Mar 08 03:24:13.666970 master-0 kubenswrapper[7387]: I0308
03:24:13.666949 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b537a655-ef73-40b5-b228-95ab6cfdedf2-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws"
Mar 08 03:24:13.667145 master-0 kubenswrapper[7387]: E0308 03:24:13.667102 7387 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found
Mar 08 03:24:13.667201 master-0 kubenswrapper[7387]: E0308 03:24:13.667187 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b537a655-ef73-40b5-b228-95ab6cfdedf2-machine-approver-tls podName:b537a655-ef73-40b5-b228-95ab6cfdedf2 nodeName:}" failed. No retries permitted until 2026-03-08 03:24:14.167165207 +0000 UTC m=+790.561640988 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/b537a655-ef73-40b5-b228-95ab6cfdedf2-machine-approver-tls") pod "machine-approver-754bdc9f9d-lssws" (UID: "b537a655-ef73-40b5-b228-95ab6cfdedf2") : secret "machine-approver-tls" not found
Mar 08 03:24:13.667879 master-0 kubenswrapper[7387]: I0308 03:24:13.667837 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b537a655-ef73-40b5-b228-95ab6cfdedf2-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws"
Mar 08 03:24:13.668090 master-0 kubenswrapper[7387]: I0308 03:24:13.668032 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b537a655-ef73-40b5-b228-95ab6cfdedf2-config\") pod
\"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws"
Mar 08 03:24:13.693264 master-0 kubenswrapper[7387]: I0308 03:24:13.693204 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4t2j\" (UniqueName: \"kubernetes.io/projected/b537a655-ef73-40b5-b228-95ab6cfdedf2-kube-api-access-d4t2j\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws"
Mar 08 03:24:13.769371 master-0 kubenswrapper[7387]: I0308 03:24:13.769300 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa65e4-4348-426c-8f41-150c99ee4d6a" path="/var/lib/kubelet/pods/31fa65e4-4348-426c-8f41-150c99ee4d6a/volumes"
Mar 08 03:24:13.807333 master-0 kubenswrapper[7387]: I0308 03:24:13.807279 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-dn4ll_c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/etcd-operator/3.log"
Mar 08 03:24:14.008102 master-0 kubenswrapper[7387]: I0308 03:24:14.007525 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-dn4ll_c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/etcd-operator/4.log"
Mar 08 03:24:14.173282 master-0 kubenswrapper[7387]: I0308 03:24:14.173089 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b537a655-ef73-40b5-b228-95ab6cfdedf2-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws"
Mar 08 03:24:14.173535 master-0 kubenswrapper[7387]: E0308 03:24:14.173337 7387 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret
"machine-approver-tls" not found
Mar 08 03:24:14.173535 master-0 kubenswrapper[7387]: E0308 03:24:14.173424 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b537a655-ef73-40b5-b228-95ab6cfdedf2-machine-approver-tls podName:b537a655-ef73-40b5-b228-95ab6cfdedf2 nodeName:}" failed. No retries permitted until 2026-03-08 03:24:15.173394601 +0000 UTC m=+791.567870322 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/b537a655-ef73-40b5-b228-95ab6cfdedf2-machine-approver-tls") pod "machine-approver-754bdc9f9d-lssws" (UID: "b537a655-ef73-40b5-b228-95ab6cfdedf2") : secret "machine-approver-tls" not found
Mar 08 03:24:14.212489 master-0 kubenswrapper[7387]: I0308 03:24:14.212444 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/setup/0.log"
Mar 08 03:24:14.409134 master-0 kubenswrapper[7387]: I0308 03:24:14.409088 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-ensure-env-vars/0.log"
Mar 08 03:24:14.599896 master-0 kubenswrapper[7387]: I0308 03:24:14.599635 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:24:14.599896 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:24:14.599896 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:24:14.599896 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:24:14.600718 master-0 kubenswrapper[7387]: I0308 03:24:14.599988 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure"
output="HTTP probe failed with statuscode: 500"
Mar 08 03:24:14.606977 master-0 kubenswrapper[7387]: I0308 03:24:14.606949 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-resources-copy/0.log"
Mar 08 03:24:14.807328 master-0 kubenswrapper[7387]: I0308 03:24:14.807262 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log"
Mar 08 03:24:14.986469 master-0 kubenswrapper[7387]: I0308 03:24:14.986420 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7"
Mar 08 03:24:14.986772 master-0 kubenswrapper[7387]: E0308 03:24:14.986711 7387 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found
Mar 08 03:24:14.987036 master-0 kubenswrapper[7387]: E0308 03:24:14.986851 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls podName:8c65557b-9566-49f1-a049-fe492ca201b5 nodeName:}" failed. No retries permitted until 2026-03-08 03:24:30.986822174 +0000 UTC m=+807.381297885 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls") pod "machine-api-operator-84bf6db4f9-5l4t7" (UID: "8c65557b-9566-49f1-a049-fe492ca201b5") : secret "machine-api-operator-tls" not found
Mar 08 03:24:15.014268 master-0 kubenswrapper[7387]: I0308 03:24:15.014221 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log"
Mar 08 03:24:15.189336 master-0 kubenswrapper[7387]: I0308 03:24:15.189260 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b537a655-ef73-40b5-b228-95ab6cfdedf2-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws"
Mar 08 03:24:15.189553 master-0 kubenswrapper[7387]: E0308 03:24:15.189458 7387 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: secret "machine-approver-tls" not found
Mar 08 03:24:15.189605 master-0 kubenswrapper[7387]: E0308 03:24:15.189563 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b537a655-ef73-40b5-b228-95ab6cfdedf2-machine-approver-tls podName:b537a655-ef73-40b5-b228-95ab6cfdedf2 nodeName:}" failed. No retries permitted until 2026-03-08 03:24:17.189533015 +0000 UTC m=+793.584008706 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/b537a655-ef73-40b5-b228-95ab6cfdedf2-machine-approver-tls") pod "machine-approver-754bdc9f9d-lssws" (UID: "b537a655-ef73-40b5-b228-95ab6cfdedf2") : secret "machine-approver-tls" not found
Mar 08 03:24:15.210313 master-0 kubenswrapper[7387]: I0308 03:24:15.210225 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log"
Mar 08 03:24:15.471369 master-0 kubenswrapper[7387]: I0308 03:24:15.471294 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-readyz/0.log"
Mar 08 03:24:15.492941 master-0 kubenswrapper[7387]: I0308 03:24:15.492850 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx"
Mar 08 03:24:15.493337 master-0 kubenswrapper[7387]: E0308 03:24:15.493240 7387 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Mar 08 03:24:15.493445 master-0 kubenswrapper[7387]: E0308 03:24:15.493342 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-tls podName:ae8f3a1e-689b-4107-993a-dde67f4decf2 nodeName:}" failed. No retries permitted until 2026-03-08 03:24:19.493312324 +0000 UTC m=+795.887788025 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-lkwmx" (UID: "ae8f3a1e-689b-4107-993a-dde67f4decf2") : secret "prometheus-operator-tls" not found
Mar 08 03:24:15.598996 master-0 kubenswrapper[7387]: I0308 03:24:15.598888 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:24:15.598996 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:24:15.598996 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:24:15.598996 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:24:15.599282 master-0 kubenswrapper[7387]: I0308 03:24:15.599029 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:24:15.944020 master-0 kubenswrapper[7387]: I0308 03:24:15.943933 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log"
Mar 08 03:24:15.958041 master-0 kubenswrapper[7387]: I0308 03:24:15.957939 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_ed2e0194-6b50-4478-aba4-21193d2c18aa/installer/0.log"
Mar 08 03:24:16.001687 master-0 kubenswrapper[7387]: I0308 03:24:16.001623 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6176b631-3911-41cd-beb6-5bc2e924c3a7-cert\") pod \"ingress-canary-fhncs\" (UID: \"6176b631-3911-41cd-beb6-5bc2e924c3a7\") "
pod="openshift-ingress-canary/ingress-canary-fhncs"
Mar 08 03:24:16.002223 master-0 kubenswrapper[7387]: E0308 03:24:16.001980 7387 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Mar 08 03:24:16.002223 master-0 kubenswrapper[7387]: E0308 03:24:16.002109 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6176b631-3911-41cd-beb6-5bc2e924c3a7-cert podName:6176b631-3911-41cd-beb6-5bc2e924c3a7 nodeName:}" failed. No retries permitted until 2026-03-08 03:24:24.002075674 +0000 UTC m=+800.396551405 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6176b631-3911-41cd-beb6-5bc2e924c3a7-cert") pod "ingress-canary-fhncs" (UID: "6176b631-3911-41cd-beb6-5bc2e924c3a7") : secret "canary-serving-cert" not found
Mar 08 03:24:16.010044 master-0 kubenswrapper[7387]: I0308 03:24:16.009978 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-zcr8w_5a058138-8039-4841-821b-7ee5bb8648e4/kube-apiserver-operator/2.log"
Mar 08 03:24:16.207985 master-0 kubenswrapper[7387]: I0308 03:24:16.207877 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-zcr8w_5a058138-8039-4841-821b-7ee5bb8648e4/kube-apiserver-operator/3.log"
Mar 08 03:24:16.407966 master-0 kubenswrapper[7387]: I0308 03:24:16.405650 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5f77c8e18b751d90bc0dfe2d4e304050/setup/0.log"
Mar 08 03:24:16.598993 master-0 kubenswrapper[7387]: I0308 03:24:16.598952 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason
withheld
Mar 08 03:24:16.598993 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:24:16.598993 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:24:16.598993 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:24:16.599366 master-0 kubenswrapper[7387]: I0308 03:24:16.599338 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:24:16.622079 master-0 kubenswrapper[7387]: I0308 03:24:16.622053 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5f77c8e18b751d90bc0dfe2d4e304050/kube-apiserver/0.log"
Mar 08 03:24:16.807222 master-0 kubenswrapper[7387]: I0308 03:24:16.807150 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5f77c8e18b751d90bc0dfe2d4e304050/kube-apiserver-insecure-readyz/0.log"
Mar 08 03:24:17.012639 master-0 kubenswrapper[7387]: I0308 03:24:17.012582 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_0a2e5993-e0cb-4c63-9dda-abbb60bfe42b/installer/0.log"
Mar 08 03:24:17.215404 master-0 kubenswrapper[7387]: I0308 03:24:17.215311 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_0a8d4b89-fd81-4418-9f72-c8447fad86ad/installer/0.log"
Mar 08 03:24:17.220872 master-0 kubenswrapper[7387]: I0308 03:24:17.220738 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b537a655-ef73-40b5-b228-95ab6cfdedf2-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") "
pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws"
Mar 08 03:24:17.225251 master-0 kubenswrapper[7387]: I0308 03:24:17.225179 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b537a655-ef73-40b5-b228-95ab6cfdedf2-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws"
Mar 08 03:24:17.413316 master-0 kubenswrapper[7387]: I0308 03:24:17.413182 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-xtwpr_2468d2a3-ec65-4888-a86a-3f66fa311f56/kube-controller-manager-operator/2.log"
Mar 08 03:24:17.445408 master-0 kubenswrapper[7387]: I0308 03:24:17.445359 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws"
Mar 08 03:24:17.599415 master-0 kubenswrapper[7387]: I0308 03:24:17.599320 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:24:17.599415 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:24:17.599415 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:24:17.599415 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:24:17.599865 master-0 kubenswrapper[7387]: I0308 03:24:17.599419 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:24:17.619183 master-0 kubenswrapper[7387]: I0308
03:24:17.618808 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-xtwpr_2468d2a3-ec65-4888-a86a-3f66fa311f56/kube-controller-manager-operator/3.log"
Mar 08 03:24:17.811789 master-0 kubenswrapper[7387]: I0308 03:24:17.811554 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_f78c05e1499b533b83f091333d61f045/kube-controller-manager/4.log"
Mar 08 03:24:18.023187 master-0 kubenswrapper[7387]: I0308 03:24:18.022712 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_f78c05e1499b533b83f091333d61f045/cluster-policy-controller/1.log"
Mar 08 03:24:18.215651 master-0 kubenswrapper[7387]: I0308 03:24:18.215496 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_f78c05e1499b533b83f091333d61f045/kube-controller-manager/5.log"
Mar 08 03:24:18.278245 master-0 kubenswrapper[7387]: I0308 03:24:18.278080 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc"]
Mar 08 03:24:18.278525 master-0 kubenswrapper[7387]: I0308 03:24:18.278425 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" podUID="f650cb41-406a-45e4-996d-3baa7acff8bc" containerName="cluster-cloud-controller-manager" containerID="cri-o://baa7f25d9f5a2f070c19a532a1bb8a1def30c62670089a7cee6a2e12e3563794" gracePeriod=30
Mar 08 03:24:18.278609 master-0 kubenswrapper[7387]: I0308 03:24:18.278579 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc"
podUID="f650cb41-406a-45e4-996d-3baa7acff8bc" containerName="config-sync-controllers" containerID="cri-o://a6400ef35b31d74c89bd20523e01ccc80f99326efc39c0ad1c2089560f7648b5" gracePeriod=30
Mar 08 03:24:18.414940 master-0 kubenswrapper[7387]: I0308 03:24:18.412574 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_f78c05e1499b533b83f091333d61f045/cluster-policy-controller/2.log"
Mar 08 03:24:18.533990 master-0 kubenswrapper[7387]: I0308 03:24:18.529620 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws" event={"ID":"b537a655-ef73-40b5-b228-95ab6cfdedf2","Type":"ContainerStarted","Data":"ea6e8d32f51b27123c4b03e6721cf75561090b0ccc69a1d0cebfb90797a58faa"}
Mar 08 03:24:18.533990 master-0 kubenswrapper[7387]: I0308 03:24:18.529674 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws" event={"ID":"b537a655-ef73-40b5-b228-95ab6cfdedf2","Type":"ContainerStarted","Data":"343f5202f680e6489744b1829ff30f9c82b78fc022fbaf1325e4c8fa7cfe17d8"}
Mar 08 03:24:18.537923 master-0 kubenswrapper[7387]: I0308 03:24:18.534737 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-559568b945-56tkc_f650cb41-406a-45e4-996d-3baa7acff8bc/kube-rbac-proxy/1.log"
Mar 08 03:24:18.547391 master-0 kubenswrapper[7387]: I0308 03:24:18.547339 7387 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc"
Mar 08 03:24:18.549139 master-0 kubenswrapper[7387]: I0308 03:24:18.549009 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-559568b945-56tkc_f650cb41-406a-45e4-996d-3baa7acff8bc/kube-rbac-proxy/1.log"
Mar 08 03:24:18.551219 master-0 kubenswrapper[7387]: I0308 03:24:18.551194 7387 generic.go:334] "Generic (PLEG): container finished" podID="f650cb41-406a-45e4-996d-3baa7acff8bc" containerID="a6400ef35b31d74c89bd20523e01ccc80f99326efc39c0ad1c2089560f7648b5" exitCode=0
Mar 08 03:24:18.551219 master-0 kubenswrapper[7387]: I0308 03:24:18.551219 7387 generic.go:334] "Generic (PLEG): container finished" podID="f650cb41-406a-45e4-996d-3baa7acff8bc" containerID="baa7f25d9f5a2f070c19a532a1bb8a1def30c62670089a7cee6a2e12e3563794" exitCode=0
Mar 08 03:24:18.551316 master-0 kubenswrapper[7387]: I0308 03:24:18.551241 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" event={"ID":"f650cb41-406a-45e4-996d-3baa7acff8bc","Type":"ContainerDied","Data":"a6400ef35b31d74c89bd20523e01ccc80f99326efc39c0ad1c2089560f7648b5"}
Mar 08 03:24:18.551316 master-0 kubenswrapper[7387]: I0308 03:24:18.551273 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" event={"ID":"f650cb41-406a-45e4-996d-3baa7acff8bc","Type":"ContainerDied","Data":"baa7f25d9f5a2f070c19a532a1bb8a1def30c62670089a7cee6a2e12e3563794"}
Mar 08 03:24:18.551316 master-0 kubenswrapper[7387]: I0308 03:24:18.551312 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc"
event={"ID":"f650cb41-406a-45e4-996d-3baa7acff8bc","Type":"ContainerDied","Data":"11a536000b80400c7bcaa1e52cfab58145a4e4f9f3de39066de64d0e1157a40f"} Mar 08 03:24:18.551402 master-0 kubenswrapper[7387]: I0308 03:24:18.551329 7387 scope.go:117] "RemoveContainer" containerID="f2c7fa4a8051f17f4b1e2230b60f8ce9d6d8e4749e634e85823d1b34d31700aa" Mar 08 03:24:18.562603 master-0 kubenswrapper[7387]: I0308 03:24:18.562540 7387 scope.go:117] "RemoveContainer" containerID="a6400ef35b31d74c89bd20523e01ccc80f99326efc39c0ad1c2089560f7648b5" Mar 08 03:24:18.579767 master-0 kubenswrapper[7387]: I0308 03:24:18.579716 7387 scope.go:117] "RemoveContainer" containerID="baa7f25d9f5a2f070c19a532a1bb8a1def30c62670089a7cee6a2e12e3563794" Mar 08 03:24:18.593522 master-0 kubenswrapper[7387]: I0308 03:24:18.593359 7387 scope.go:117] "RemoveContainer" containerID="f2c7fa4a8051f17f4b1e2230b60f8ce9d6d8e4749e634e85823d1b34d31700aa" Mar 08 03:24:18.593834 master-0 kubenswrapper[7387]: E0308 03:24:18.593689 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2c7fa4a8051f17f4b1e2230b60f8ce9d6d8e4749e634e85823d1b34d31700aa\": container with ID starting with f2c7fa4a8051f17f4b1e2230b60f8ce9d6d8e4749e634e85823d1b34d31700aa not found: ID does not exist" containerID="f2c7fa4a8051f17f4b1e2230b60f8ce9d6d8e4749e634e85823d1b34d31700aa" Mar 08 03:24:18.593834 master-0 kubenswrapper[7387]: I0308 03:24:18.593720 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2c7fa4a8051f17f4b1e2230b60f8ce9d6d8e4749e634e85823d1b34d31700aa"} err="failed to get container status \"f2c7fa4a8051f17f4b1e2230b60f8ce9d6d8e4749e634e85823d1b34d31700aa\": rpc error: code = NotFound desc = could not find container \"f2c7fa4a8051f17f4b1e2230b60f8ce9d6d8e4749e634e85823d1b34d31700aa\": container with ID starting with f2c7fa4a8051f17f4b1e2230b60f8ce9d6d8e4749e634e85823d1b34d31700aa not found: ID does not exist" Mar 
08 03:24:18.593834 master-0 kubenswrapper[7387]: I0308 03:24:18.593740 7387 scope.go:117] "RemoveContainer" containerID="a6400ef35b31d74c89bd20523e01ccc80f99326efc39c0ad1c2089560f7648b5" Mar 08 03:24:18.594334 master-0 kubenswrapper[7387]: E0308 03:24:18.594293 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6400ef35b31d74c89bd20523e01ccc80f99326efc39c0ad1c2089560f7648b5\": container with ID starting with a6400ef35b31d74c89bd20523e01ccc80f99326efc39c0ad1c2089560f7648b5 not found: ID does not exist" containerID="a6400ef35b31d74c89bd20523e01ccc80f99326efc39c0ad1c2089560f7648b5" Mar 08 03:24:18.594334 master-0 kubenswrapper[7387]: I0308 03:24:18.594320 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6400ef35b31d74c89bd20523e01ccc80f99326efc39c0ad1c2089560f7648b5"} err="failed to get container status \"a6400ef35b31d74c89bd20523e01ccc80f99326efc39c0ad1c2089560f7648b5\": rpc error: code = NotFound desc = could not find container \"a6400ef35b31d74c89bd20523e01ccc80f99326efc39c0ad1c2089560f7648b5\": container with ID starting with a6400ef35b31d74c89bd20523e01ccc80f99326efc39c0ad1c2089560f7648b5 not found: ID does not exist" Mar 08 03:24:18.594432 master-0 kubenswrapper[7387]: I0308 03:24:18.594334 7387 scope.go:117] "RemoveContainer" containerID="baa7f25d9f5a2f070c19a532a1bb8a1def30c62670089a7cee6a2e12e3563794" Mar 08 03:24:18.594803 master-0 kubenswrapper[7387]: E0308 03:24:18.594781 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baa7f25d9f5a2f070c19a532a1bb8a1def30c62670089a7cee6a2e12e3563794\": container with ID starting with baa7f25d9f5a2f070c19a532a1bb8a1def30c62670089a7cee6a2e12e3563794 not found: ID does not exist" containerID="baa7f25d9f5a2f070c19a532a1bb8a1def30c62670089a7cee6a2e12e3563794" Mar 08 03:24:18.594867 master-0 kubenswrapper[7387]: I0308 
03:24:18.594801 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baa7f25d9f5a2f070c19a532a1bb8a1def30c62670089a7cee6a2e12e3563794"} err="failed to get container status \"baa7f25d9f5a2f070c19a532a1bb8a1def30c62670089a7cee6a2e12e3563794\": rpc error: code = NotFound desc = could not find container \"baa7f25d9f5a2f070c19a532a1bb8a1def30c62670089a7cee6a2e12e3563794\": container with ID starting with baa7f25d9f5a2f070c19a532a1bb8a1def30c62670089a7cee6a2e12e3563794 not found: ID does not exist" Mar 08 03:24:18.594867 master-0 kubenswrapper[7387]: I0308 03:24:18.594813 7387 scope.go:117] "RemoveContainer" containerID="f2c7fa4a8051f17f4b1e2230b60f8ce9d6d8e4749e634e85823d1b34d31700aa" Mar 08 03:24:18.595072 master-0 kubenswrapper[7387]: I0308 03:24:18.595051 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2c7fa4a8051f17f4b1e2230b60f8ce9d6d8e4749e634e85823d1b34d31700aa"} err="failed to get container status \"f2c7fa4a8051f17f4b1e2230b60f8ce9d6d8e4749e634e85823d1b34d31700aa\": rpc error: code = NotFound desc = could not find container \"f2c7fa4a8051f17f4b1e2230b60f8ce9d6d8e4749e634e85823d1b34d31700aa\": container with ID starting with f2c7fa4a8051f17f4b1e2230b60f8ce9d6d8e4749e634e85823d1b34d31700aa not found: ID does not exist" Mar 08 03:24:18.595072 master-0 kubenswrapper[7387]: I0308 03:24:18.595070 7387 scope.go:117] "RemoveContainer" containerID="a6400ef35b31d74c89bd20523e01ccc80f99326efc39c0ad1c2089560f7648b5" Mar 08 03:24:18.595404 master-0 kubenswrapper[7387]: I0308 03:24:18.595383 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6400ef35b31d74c89bd20523e01ccc80f99326efc39c0ad1c2089560f7648b5"} err="failed to get container status \"a6400ef35b31d74c89bd20523e01ccc80f99326efc39c0ad1c2089560f7648b5\": rpc error: code = NotFound desc = could not find container 
\"a6400ef35b31d74c89bd20523e01ccc80f99326efc39c0ad1c2089560f7648b5\": container with ID starting with a6400ef35b31d74c89bd20523e01ccc80f99326efc39c0ad1c2089560f7648b5 not found: ID does not exist" Mar 08 03:24:18.595404 master-0 kubenswrapper[7387]: I0308 03:24:18.595402 7387 scope.go:117] "RemoveContainer" containerID="baa7f25d9f5a2f070c19a532a1bb8a1def30c62670089a7cee6a2e12e3563794" Mar 08 03:24:18.595752 master-0 kubenswrapper[7387]: I0308 03:24:18.595725 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baa7f25d9f5a2f070c19a532a1bb8a1def30c62670089a7cee6a2e12e3563794"} err="failed to get container status \"baa7f25d9f5a2f070c19a532a1bb8a1def30c62670089a7cee6a2e12e3563794\": rpc error: code = NotFound desc = could not find container \"baa7f25d9f5a2f070c19a532a1bb8a1def30c62670089a7cee6a2e12e3563794\": container with ID starting with baa7f25d9f5a2f070c19a532a1bb8a1def30c62670089a7cee6a2e12e3563794 not found: ID does not exist" Mar 08 03:24:18.597007 master-0 kubenswrapper[7387]: I0308 03:24:18.596962 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:24:18.601058 master-0 kubenswrapper[7387]: I0308 03:24:18.600061 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:18.601058 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:18.601058 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:18.601058 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:18.601058 master-0 kubenswrapper[7387]: I0308 03:24:18.600154 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:18.610939 master-0 kubenswrapper[7387]: I0308 03:24:18.610339 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_a1a56802af72ce1aac6b5077f1695ac0/kube-scheduler/0.log" Mar 08 03:24:18.644691 master-0 kubenswrapper[7387]: I0308 03:24:18.644638 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f650cb41-406a-45e4-996d-3baa7acff8bc-host-etc-kube\") pod \"f650cb41-406a-45e4-996d-3baa7acff8bc\" (UID: \"f650cb41-406a-45e4-996d-3baa7acff8bc\") " Mar 08 03:24:18.644691 master-0 kubenswrapper[7387]: I0308 03:24:18.644689 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsrjx\" (UniqueName: \"kubernetes.io/projected/f650cb41-406a-45e4-996d-3baa7acff8bc-kube-api-access-rsrjx\") pod \"f650cb41-406a-45e4-996d-3baa7acff8bc\" (UID: \"f650cb41-406a-45e4-996d-3baa7acff8bc\") " Mar 08 03:24:18.644951 master-0 kubenswrapper[7387]: I0308 03:24:18.644717 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/f650cb41-406a-45e4-996d-3baa7acff8bc-cloud-controller-manager-operator-tls\") pod \"f650cb41-406a-45e4-996d-3baa7acff8bc\" (UID: \"f650cb41-406a-45e4-996d-3baa7acff8bc\") " Mar 08 03:24:18.644951 master-0 kubenswrapper[7387]: I0308 03:24:18.644849 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f650cb41-406a-45e4-996d-3baa7acff8bc-auth-proxy-config\") pod \"f650cb41-406a-45e4-996d-3baa7acff8bc\" (UID: \"f650cb41-406a-45e4-996d-3baa7acff8bc\") " Mar 08 03:24:18.644951 master-0 kubenswrapper[7387]: I0308 03:24:18.644877 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f650cb41-406a-45e4-996d-3baa7acff8bc-images\") pod \"f650cb41-406a-45e4-996d-3baa7acff8bc\" (UID: \"f650cb41-406a-45e4-996d-3baa7acff8bc\") " Mar 08 03:24:18.645478 master-0 kubenswrapper[7387]: I0308 03:24:18.645448 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f650cb41-406a-45e4-996d-3baa7acff8bc-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "f650cb41-406a-45e4-996d-3baa7acff8bc" (UID: "f650cb41-406a-45e4-996d-3baa7acff8bc"). InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:24:18.647310 master-0 kubenswrapper[7387]: I0308 03:24:18.647271 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f650cb41-406a-45e4-996d-3baa7acff8bc-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "f650cb41-406a-45e4-996d-3baa7acff8bc" (UID: "f650cb41-406a-45e4-996d-3baa7acff8bc"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:24:18.647754 master-0 kubenswrapper[7387]: I0308 03:24:18.647724 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f650cb41-406a-45e4-996d-3baa7acff8bc-images" (OuterVolumeSpecName: "images") pod "f650cb41-406a-45e4-996d-3baa7acff8bc" (UID: "f650cb41-406a-45e4-996d-3baa7acff8bc"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:24:18.648709 master-0 kubenswrapper[7387]: I0308 03:24:18.648300 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f650cb41-406a-45e4-996d-3baa7acff8bc-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "f650cb41-406a-45e4-996d-3baa7acff8bc" (UID: "f650cb41-406a-45e4-996d-3baa7acff8bc"). 
InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:24:18.650431 master-0 kubenswrapper[7387]: I0308 03:24:18.649438 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f650cb41-406a-45e4-996d-3baa7acff8bc-kube-api-access-rsrjx" (OuterVolumeSpecName: "kube-api-access-rsrjx") pod "f650cb41-406a-45e4-996d-3baa7acff8bc" (UID: "f650cb41-406a-45e4-996d-3baa7acff8bc"). InnerVolumeSpecName "kube-api-access-rsrjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:24:18.746120 master-0 kubenswrapper[7387]: I0308 03:24:18.746038 7387 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f650cb41-406a-45e4-996d-3baa7acff8bc-host-etc-kube\") on node \"master-0\" DevicePath \"\"" Mar 08 03:24:18.746120 master-0 kubenswrapper[7387]: I0308 03:24:18.746081 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsrjx\" (UniqueName: \"kubernetes.io/projected/f650cb41-406a-45e4-996d-3baa7acff8bc-kube-api-access-rsrjx\") on node \"master-0\" DevicePath \"\"" Mar 08 03:24:18.746120 master-0 kubenswrapper[7387]: I0308 03:24:18.746093 7387 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/f650cb41-406a-45e4-996d-3baa7acff8bc-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Mar 08 03:24:18.746120 master-0 kubenswrapper[7387]: I0308 03:24:18.746102 7387 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f650cb41-406a-45e4-996d-3baa7acff8bc-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 08 03:24:18.746120 master-0 kubenswrapper[7387]: I0308 03:24:18.746111 7387 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/f650cb41-406a-45e4-996d-3baa7acff8bc-images\") on node \"master-0\" DevicePath \"\"" Mar 08 03:24:18.810020 master-0 kubenswrapper[7387]: I0308 03:24:18.809864 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_a1a56802af72ce1aac6b5077f1695ac0/kube-scheduler/1.log" Mar 08 03:24:19.008334 master-0 kubenswrapper[7387]: I0308 03:24:19.008263 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-rz5c8_89e15db4-c541-4d53-878d-706fa022f970/kube-scheduler-operator-container/2.log" Mar 08 03:24:19.213870 master-0 kubenswrapper[7387]: I0308 03:24:19.213773 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-rz5c8_89e15db4-c541-4d53-878d-706fa022f970/kube-scheduler-operator-container/3.log" Mar 08 03:24:19.354217 master-0 kubenswrapper[7387]: I0308 03:24:19.354134 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-zljww\" (UID: \"c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww" Mar 08 03:24:19.360055 master-0 kubenswrapper[7387]: I0308 03:24:19.359690 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-zljww\" (UID: \"c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww" Mar 08 03:24:19.408575 master-0 
kubenswrapper[7387]: I0308 03:24:19.408522 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-gstfr_2a506cf6-bc39-4089-9caa-4c14c4d15c11/openshift-apiserver-operator/2.log" Mar 08 03:24:19.556574 master-0 kubenswrapper[7387]: I0308 03:24:19.556513 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" Mar 08 03:24:19.556787 master-0 kubenswrapper[7387]: I0308 03:24:19.556653 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc" Mar 08 03:24:19.562203 master-0 kubenswrapper[7387]: I0308 03:24:19.562159 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" Mar 08 03:24:19.565504 master-0 kubenswrapper[7387]: I0308 03:24:19.565471 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" Mar 08 03:24:19.599652 master-0 kubenswrapper[7387]: I0308 03:24:19.599611 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:19.599652 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:19.599652 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:19.599652 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:19.599864 master-0 kubenswrapper[7387]: I0308 03:24:19.599694 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:19.610092 master-0 kubenswrapper[7387]: I0308 03:24:19.610033 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-gstfr_2a506cf6-bc39-4089-9caa-4c14c4d15c11/openshift-apiserver-operator/3.log" Mar 08 03:24:19.654927 master-0 kubenswrapper[7387]: I0308 03:24:19.654865 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-7hbhc" Mar 08 03:24:19.657986 master-0 kubenswrapper[7387]: I0308 03:24:19.657875 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc"] Mar 08 03:24:19.661106 master-0 kubenswrapper[7387]: I0308 03:24:19.660315 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-56tkc"] Mar 08 03:24:19.665709 master-0 
kubenswrapper[7387]: I0308 03:24:19.663632 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww" Mar 08 03:24:19.699111 master-0 kubenswrapper[7387]: I0308 03:24:19.698994 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc"] Mar 08 03:24:19.700981 master-0 kubenswrapper[7387]: E0308 03:24:19.699282 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f650cb41-406a-45e4-996d-3baa7acff8bc" containerName="cluster-cloud-controller-manager" Mar 08 03:24:19.700981 master-0 kubenswrapper[7387]: I0308 03:24:19.699302 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="f650cb41-406a-45e4-996d-3baa7acff8bc" containerName="cluster-cloud-controller-manager" Mar 08 03:24:19.700981 master-0 kubenswrapper[7387]: E0308 03:24:19.699335 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f650cb41-406a-45e4-996d-3baa7acff8bc" containerName="kube-rbac-proxy" Mar 08 03:24:19.700981 master-0 kubenswrapper[7387]: I0308 03:24:19.699344 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="f650cb41-406a-45e4-996d-3baa7acff8bc" containerName="kube-rbac-proxy" Mar 08 03:24:19.700981 master-0 kubenswrapper[7387]: E0308 03:24:19.699357 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f650cb41-406a-45e4-996d-3baa7acff8bc" containerName="config-sync-controllers" Mar 08 03:24:19.700981 master-0 kubenswrapper[7387]: I0308 03:24:19.699365 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="f650cb41-406a-45e4-996d-3baa7acff8bc" containerName="config-sync-controllers" Mar 08 03:24:19.700981 master-0 kubenswrapper[7387]: E0308 03:24:19.699384 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f650cb41-406a-45e4-996d-3baa7acff8bc" containerName="kube-rbac-proxy" Mar 08 03:24:19.700981 master-0 
kubenswrapper[7387]: I0308 03:24:19.699392 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="f650cb41-406a-45e4-996d-3baa7acff8bc" containerName="kube-rbac-proxy" Mar 08 03:24:19.700981 master-0 kubenswrapper[7387]: I0308 03:24:19.699508 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="f650cb41-406a-45e4-996d-3baa7acff8bc" containerName="kube-rbac-proxy" Mar 08 03:24:19.700981 master-0 kubenswrapper[7387]: I0308 03:24:19.699522 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="f650cb41-406a-45e4-996d-3baa7acff8bc" containerName="config-sync-controllers" Mar 08 03:24:19.700981 master-0 kubenswrapper[7387]: I0308 03:24:19.699534 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="f650cb41-406a-45e4-996d-3baa7acff8bc" containerName="cluster-cloud-controller-manager" Mar 08 03:24:19.700981 master-0 kubenswrapper[7387]: I0308 03:24:19.699962 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="f650cb41-406a-45e4-996d-3baa7acff8bc" containerName="kube-rbac-proxy" Mar 08 03:24:19.700981 master-0 kubenswrapper[7387]: I0308 03:24:19.700773 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:24:19.704787 master-0 kubenswrapper[7387]: I0308 03:24:19.703557 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 08 03:24:19.704787 master-0 kubenswrapper[7387]: I0308 03:24:19.703751 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 08 03:24:19.704787 master-0 kubenswrapper[7387]: I0308 03:24:19.703862 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 08 03:24:19.704787 master-0 kubenswrapper[7387]: I0308 03:24:19.703992 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 08 03:24:19.704787 master-0 kubenswrapper[7387]: I0308 03:24:19.704128 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 08 03:24:19.707609 master-0 kubenswrapper[7387]: I0308 03:24:19.707292 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-qnnnr" Mar 08 03:24:19.760352 master-0 kubenswrapper[7387]: I0308 03:24:19.760292 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 
03:24:19.760550 master-0 kubenswrapper[7387]: I0308 03:24:19.760456 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:24:19.760550 master-0 kubenswrapper[7387]: I0308 03:24:19.760484 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:24:19.760550 master-0 kubenswrapper[7387]: I0308 03:24:19.760514 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:24:19.760716 master-0 kubenswrapper[7387]: I0308 03:24:19.760555 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqrn6\" (UniqueName: \"kubernetes.io/projected/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-kube-api-access-qqrn6\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:24:19.766592 master-0 kubenswrapper[7387]: I0308 03:24:19.766548 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f650cb41-406a-45e4-996d-3baa7acff8bc" path="/var/lib/kubelet/pods/f650cb41-406a-45e4-996d-3baa7acff8bc/volumes" Mar 08 03:24:19.806152 master-0 kubenswrapper[7387]: I0308 03:24:19.806094 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-5bf974f84f-hzx44_f2057f75-159d-4416-a234-050f0fe1afc9/fix-audit-permissions/0.log" Mar 08 03:24:19.861627 master-0 kubenswrapper[7387]: I0308 03:24:19.861567 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:24:19.861627 master-0 kubenswrapper[7387]: I0308 03:24:19.861616 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:24:19.861858 master-0 kubenswrapper[7387]: I0308 03:24:19.861646 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: 
\"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:24:19.861858 master-0 kubenswrapper[7387]: I0308 03:24:19.861741 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:24:19.862025 master-0 kubenswrapper[7387]: I0308 03:24:19.861977 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqrn6\" (UniqueName: \"kubernetes.io/projected/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-kube-api-access-qqrn6\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:24:19.862123 master-0 kubenswrapper[7387]: I0308 03:24:19.862088 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:24:19.864394 master-0 kubenswrapper[7387]: I0308 03:24:19.862497 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: 
\"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:24:19.864394 master-0 kubenswrapper[7387]: I0308 03:24:19.862929 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:24:19.865733 master-0 kubenswrapper[7387]: I0308 03:24:19.865676 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:24:19.881384 master-0 kubenswrapper[7387]: I0308 03:24:19.881337 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqrn6\" (UniqueName: \"kubernetes.io/projected/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-kube-api-access-qqrn6\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:24:19.980241 master-0 kubenswrapper[7387]: I0308 03:24:19.980170 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx"] Mar 08 03:24:19.998262 master-0 kubenswrapper[7387]: W0308 03:24:19.998161 7387 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae8f3a1e_689b_4107_993a_dde67f4decf2.slice/crio-578f97e51f168b1d370b9c59540a7c839458a113d3777e0d88797827b040f10e WatchSource:0}: Error finding container 578f97e51f168b1d370b9c59540a7c839458a113d3777e0d88797827b040f10e: Status 404 returned error can't find the container with id 578f97e51f168b1d370b9c59540a7c839458a113d3777e0d88797827b040f10e Mar 08 03:24:20.017490 master-0 kubenswrapper[7387]: I0308 03:24:20.017424 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-5bf974f84f-hzx44_f2057f75-159d-4416-a234-050f0fe1afc9/openshift-apiserver/0.log" Mar 08 03:24:20.042971 master-0 kubenswrapper[7387]: I0308 03:24:20.042861 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:24:20.099120 master-0 kubenswrapper[7387]: I0308 03:24:20.099050 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww"] Mar 08 03:24:20.210696 master-0 kubenswrapper[7387]: I0308 03:24:20.210631 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-5bf974f84f-hzx44_f2057f75-159d-4416-a234-050f0fe1afc9/openshift-apiserver-check-endpoints/0.log" Mar 08 03:24:20.407187 master-0 kubenswrapper[7387]: I0308 03:24:20.407145 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-dn4ll_c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/etcd-operator/3.log" Mar 08 03:24:20.564761 master-0 kubenswrapper[7387]: I0308 03:24:20.564669 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww" 
event={"ID":"c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6","Type":"ContainerStarted","Data":"1389ca3c0a68c688490c2796e3b27e9ac02672c5ceeb0cb3aade38fd422867f7"} Mar 08 03:24:20.569231 master-0 kubenswrapper[7387]: I0308 03:24:20.567525 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" event={"ID":"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff","Type":"ContainerStarted","Data":"424917f1a3e8b6e16d958683c556139941ec49cc33d3fc5cfafc082b93c8aab0"} Mar 08 03:24:20.569231 master-0 kubenswrapper[7387]: I0308 03:24:20.567615 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" event={"ID":"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff","Type":"ContainerStarted","Data":"233963ae69c0e92fa376edf193674bed858eb5858aa47b809d66b6f44798a600"} Mar 08 03:24:20.569231 master-0 kubenswrapper[7387]: I0308 03:24:20.567633 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" event={"ID":"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff","Type":"ContainerStarted","Data":"2bd783cbda23be7989b39c47de53b6fd58c76ea7fdfdcd9d506ba6bc622ba3e3"} Mar 08 03:24:20.569231 master-0 kubenswrapper[7387]: I0308 03:24:20.569182 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" event={"ID":"ae8f3a1e-689b-4107-993a-dde67f4decf2","Type":"ContainerStarted","Data":"578f97e51f168b1d370b9c59540a7c839458a113d3777e0d88797827b040f10e"} Mar 08 03:24:20.572437 master-0 kubenswrapper[7387]: I0308 03:24:20.572401 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws" 
event={"ID":"b537a655-ef73-40b5-b228-95ab6cfdedf2","Type":"ContainerStarted","Data":"b2bf1f96c69abb910723e2ce05cf88ba62c29d23e19982dd55b5fdb8f01184e9"} Mar 08 03:24:20.600162 master-0 kubenswrapper[7387]: I0308 03:24:20.599726 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:20.600162 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:20.600162 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:20.600162 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:20.600162 master-0 kubenswrapper[7387]: I0308 03:24:20.599820 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:20.603340 master-0 kubenswrapper[7387]: I0308 03:24:20.603240 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws" podStartSLOduration=5.83924166 podStartE2EDuration="7.603216585s" podCreationTimestamp="2026-03-08 03:24:13 +0000 UTC" firstStartedPulling="2026-03-08 03:24:17.821145826 +0000 UTC m=+794.215621497" lastFinishedPulling="2026-03-08 03:24:19.585120741 +0000 UTC m=+795.979596422" observedRunningTime="2026-03-08 03:24:20.597382243 +0000 UTC m=+796.991857974" watchObservedRunningTime="2026-03-08 03:24:20.603216585 +0000 UTC m=+796.997692276" Mar 08 03:24:20.616406 master-0 kubenswrapper[7387]: I0308 03:24:20.616001 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-dn4ll_c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/etcd-operator/4.log" Mar 08 03:24:20.806022 master-0 
kubenswrapper[7387]: I0308 03:24:20.805967 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-h7lpf_0722d9c3-77b8-4770-9171-d4aeba4b0cc7/openshift-controller-manager-operator/2.log" Mar 08 03:24:21.006450 master-0 kubenswrapper[7387]: I0308 03:24:21.006204 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-h7lpf_0722d9c3-77b8-4770-9171-d4aeba4b0cc7/openshift-controller-manager-operator/3.log" Mar 08 03:24:21.209453 master-0 kubenswrapper[7387]: I0308 03:24:21.209341 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-77c5c9d7dd-xtftv_dd1c09ba-b44c-446a-abe0-53ac3e910a77/controller-manager/0.log" Mar 08 03:24:21.412563 master-0 kubenswrapper[7387]: I0308 03:24:21.412469 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-77c5c9d7dd-xtftv_dd1c09ba-b44c-446a-abe0-53ac3e910a77/controller-manager/1.log" Mar 08 03:24:21.593286 master-0 kubenswrapper[7387]: I0308 03:24:21.593209 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" event={"ID":"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff","Type":"ContainerStarted","Data":"bafb1e6b16f845fe8f2581172ee215a9bf91f23ff3b37ac192b433bd41154454"} Mar 08 03:24:21.606282 master-0 kubenswrapper[7387]: I0308 03:24:21.606164 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:21.606282 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:21.606282 master-0 kubenswrapper[7387]: 
[+]process-running ok Mar 08 03:24:21.606282 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:21.606282 master-0 kubenswrapper[7387]: I0308 03:24:21.606232 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:21.614613 master-0 kubenswrapper[7387]: I0308 03:24:21.614570 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-8c4996cd4-qsvqj_e2495994-736c-4916-b210-ff5633f3387d/route-controller-manager/1.log" Mar 08 03:24:21.808234 master-0 kubenswrapper[7387]: I0308 03:24:21.808197 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-8c4996cd4-qsvqj_e2495994-736c-4916-b210-ff5633f3387d/route-controller-manager/2.log" Mar 08 03:24:22.012013 master-0 kubenswrapper[7387]: I0308 03:24:22.011974 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-7d9c49f57b-wsswx_5a92a557-d023-4531-b3a3-e559af0fe358/catalog-operator/0.log" Mar 08 03:24:22.214692 master-0 kubenswrapper[7387]: I0308 03:24:22.214649 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-d64cfc9db-t659n_d68278f6-59d5-4bbf-b969-e47635ffd4cc/olm-operator/0.log" Mar 08 03:24:22.599652 master-0 kubenswrapper[7387]: I0308 03:24:22.599587 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:22.599652 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:22.599652 master-0 kubenswrapper[7387]: 
[+]process-running ok Mar 08 03:24:22.599652 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:22.600358 master-0 kubenswrapper[7387]: I0308 03:24:22.599659 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:22.603074 master-0 kubenswrapper[7387]: I0308 03:24:22.603028 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" event={"ID":"ae8f3a1e-689b-4107-993a-dde67f4decf2","Type":"ContainerStarted","Data":"dce67d968a89e7997846f653766fe4173bfb3ed74b8f2003b2160e5d9f4ba6d2"} Mar 08 03:24:22.603194 master-0 kubenswrapper[7387]: I0308 03:24:22.603080 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" event={"ID":"ae8f3a1e-689b-4107-993a-dde67f4decf2","Type":"ContainerStarted","Data":"cf572d7f8a085edec0412a25c5ac8300141cbe0d5dbde9afb0c296ffe93d7cd5"} Mar 08 03:24:22.605360 master-0 kubenswrapper[7387]: I0308 03:24:22.605296 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww" event={"ID":"c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6","Type":"ContainerStarted","Data":"26407c3ca61b97ca6a5ab23516c6982614940f72f59b58cd3af72397aa976645"} Mar 08 03:24:22.613101 master-0 kubenswrapper[7387]: I0308 03:24:22.613054 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-8qznw_f8711b9f-3d18-4b8d-a263-2c9af9dc68a6/package-server-manager/0.log" Mar 08 03:24:22.633549 master-0 kubenswrapper[7387]: I0308 03:24:22.633461 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" podStartSLOduration=3.63343791 podStartE2EDuration="3.63343791s" podCreationTimestamp="2026-03-08 03:24:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:24:21.657267729 +0000 UTC m=+798.051743420" watchObservedRunningTime="2026-03-08 03:24:22.63343791 +0000 UTC m=+799.027913611" Mar 08 03:24:22.659286 master-0 kubenswrapper[7387]: I0308 03:24:22.659172 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" podStartSLOduration=9.709050957 podStartE2EDuration="11.65913819s" podCreationTimestamp="2026-03-08 03:24:11 +0000 UTC" firstStartedPulling="2026-03-08 03:24:20.006487319 +0000 UTC m=+796.400963010" lastFinishedPulling="2026-03-08 03:24:21.956574542 +0000 UTC m=+798.351050243" observedRunningTime="2026-03-08 03:24:22.631896559 +0000 UTC m=+799.026372270" watchObservedRunningTime="2026-03-08 03:24:22.65913819 +0000 UTC m=+799.053613871" Mar 08 03:24:22.659856 master-0 kubenswrapper[7387]: I0308 03:24:22.659814 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww" podStartSLOduration=33.796402328 podStartE2EDuration="35.659803998s" podCreationTimestamp="2026-03-08 03:23:47 +0000 UTC" firstStartedPulling="2026-03-08 03:24:20.108254175 +0000 UTC m=+796.502729856" lastFinishedPulling="2026-03-08 03:24:21.971655845 +0000 UTC m=+798.366131526" observedRunningTime="2026-03-08 03:24:22.659703785 +0000 UTC m=+799.054179506" watchObservedRunningTime="2026-03-08 03:24:22.659803998 +0000 UTC m=+799.054279679" Mar 08 03:24:22.809817 master-0 kubenswrapper[7387]: I0308 03:24:22.809784 7387 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-8qznw_f8711b9f-3d18-4b8d-a263-2c9af9dc68a6/kube-rbac-proxy/0.log" Mar 08 03:24:23.120732 master-0 kubenswrapper[7387]: I0308 03:24:23.120672 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-8qznw_f8711b9f-3d18-4b8d-a263-2c9af9dc68a6/package-server-manager/1.log" Mar 08 03:24:23.213615 master-0 kubenswrapper[7387]: I0308 03:24:23.213567 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-7fcc847fc6-s2lnw_7a1b7b0d-6e00-485e-86e8-7bd047569328/packageserver/0.log" Mar 08 03:24:23.599516 master-0 kubenswrapper[7387]: I0308 03:24:23.599450 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:23.599516 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:23.599516 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:23.599516 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:23.600467 master-0 kubenswrapper[7387]: I0308 03:24:23.599522 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:24.018134 master-0 kubenswrapper[7387]: I0308 03:24:24.018084 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6176b631-3911-41cd-beb6-5bc2e924c3a7-cert\") pod \"ingress-canary-fhncs\" (UID: \"6176b631-3911-41cd-beb6-5bc2e924c3a7\") " pod="openshift-ingress-canary/ingress-canary-fhncs" Mar 08 03:24:24.021973 
master-0 kubenswrapper[7387]: I0308 03:24:24.021896 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6176b631-3911-41cd-beb6-5bc2e924c3a7-cert\") pod \"ingress-canary-fhncs\" (UID: \"6176b631-3911-41cd-beb6-5bc2e924c3a7\") " pod="openshift-ingress-canary/ingress-canary-fhncs" Mar 08 03:24:24.107390 master-0 kubenswrapper[7387]: I0308 03:24:24.107323 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn"] Mar 08 03:24:24.109100 master-0 kubenswrapper[7387]: I0308 03:24:24.109049 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" Mar 08 03:24:24.111455 master-0 kubenswrapper[7387]: I0308 03:24:24.111421 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-7gb49" Mar 08 03:24:24.111540 master-0 kubenswrapper[7387]: I0308 03:24:24.111465 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 08 03:24:24.111588 master-0 kubenswrapper[7387]: I0308 03:24:24.111420 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 08 03:24:24.119316 master-0 kubenswrapper[7387]: I0308 03:24:24.119274 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/16ca7ace-9608-4686-a039-a6ba6e3ab837-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-wwmnn\" (UID: \"16ca7ace-9608-4686-a039-a6ba6e3ab837\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" Mar 08 03:24:24.119419 master-0 kubenswrapper[7387]: I0308 03:24:24.119362 7387 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/16ca7ace-9608-4686-a039-a6ba6e3ab837-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-wwmnn\" (UID: \"16ca7ace-9608-4686-a039-a6ba6e3ab837\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" Mar 08 03:24:24.119460 master-0 kubenswrapper[7387]: I0308 03:24:24.119425 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/16ca7ace-9608-4686-a039-a6ba6e3ab837-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-wwmnn\" (UID: \"16ca7ace-9608-4686-a039-a6ba6e3ab837\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" Mar 08 03:24:24.119503 master-0 kubenswrapper[7387]: I0308 03:24:24.119460 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8cgc\" (UniqueName: \"kubernetes.io/projected/16ca7ace-9608-4686-a039-a6ba6e3ab837-kube-api-access-w8cgc\") pod \"openshift-state-metrics-74cc79fd76-wwmnn\" (UID: \"16ca7ace-9608-4686-a039-a6ba6e3ab837\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" Mar 08 03:24:24.130923 master-0 kubenswrapper[7387]: I0308 03:24:24.130811 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-sjs7q"] Mar 08 03:24:24.134965 master-0 kubenswrapper[7387]: I0308 03:24:24.133331 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-sjs7q" Mar 08 03:24:24.138923 master-0 kubenswrapper[7387]: I0308 03:24:24.137082 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 08 03:24:24.138923 master-0 kubenswrapper[7387]: I0308 03:24:24.138125 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 08 03:24:24.142930 master-0 kubenswrapper[7387]: I0308 03:24:24.139695 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-46c6c" Mar 08 03:24:24.149944 master-0 kubenswrapper[7387]: I0308 03:24:24.148419 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59"] Mar 08 03:24:24.149944 master-0 kubenswrapper[7387]: I0308 03:24:24.149709 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" Mar 08 03:24:24.164928 master-0 kubenswrapper[7387]: I0308 03:24:24.159851 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 08 03:24:24.164928 master-0 kubenswrapper[7387]: I0308 03:24:24.160256 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 08 03:24:24.164928 master-0 kubenswrapper[7387]: I0308 03:24:24.160396 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 08 03:24:24.164928 master-0 kubenswrapper[7387]: I0308 03:24:24.160544 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-rsc8q" Mar 08 03:24:24.179042 master-0 kubenswrapper[7387]: I0308 03:24:24.178968 7387 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59"] Mar 08 03:24:24.202596 master-0 kubenswrapper[7387]: I0308 03:24:24.202510 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn"] Mar 08 03:24:24.207258 master-0 kubenswrapper[7387]: I0308 03:24:24.207082 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-fhncs" Mar 08 03:24:24.221793 master-0 kubenswrapper[7387]: I0308 03:24:24.220791 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzgg5\" (UniqueName: \"kubernetes.io/projected/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-api-access-nzgg5\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" Mar 08 03:24:24.221793 master-0 kubenswrapper[7387]: I0308 03:24:24.220841 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-tls\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q" Mar 08 03:24:24.221793 master-0 kubenswrapper[7387]: I0308 03:24:24.220865 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-wtmp\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q" Mar 08 03:24:24.221793 master-0 kubenswrapper[7387]: I0308 03:24:24.220924 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/16ca7ace-9608-4686-a039-a6ba6e3ab837-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-wwmnn\" (UID: \"16ca7ace-9608-4686-a039-a6ba6e3ab837\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" Mar 08 03:24:24.221793 master-0 kubenswrapper[7387]: I0308 03:24:24.220958 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8cgc\" (UniqueName: \"kubernetes.io/projected/16ca7ace-9608-4686-a039-a6ba6e3ab837-kube-api-access-w8cgc\") pod \"openshift-state-metrics-74cc79fd76-wwmnn\" (UID: \"16ca7ace-9608-4686-a039-a6ba6e3ab837\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" Mar 08 03:24:24.221793 master-0 kubenswrapper[7387]: I0308 03:24:24.220981 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" Mar 08 03:24:24.221793 master-0 kubenswrapper[7387]: I0308 03:24:24.221011 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/beed862c-6283-4568-aa2e-f49b31e30a3b-sys\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q" Mar 08 03:24:24.221793 master-0 kubenswrapper[7387]: I0308 03:24:24.221058 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/16ca7ace-9608-4686-a039-a6ba6e3ab837-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-wwmnn\" (UID: 
\"16ca7ace-9608-4686-a039-a6ba6e3ab837\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" Mar 08 03:24:24.221793 master-0 kubenswrapper[7387]: I0308 03:24:24.221094 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" Mar 08 03:24:24.221793 master-0 kubenswrapper[7387]: I0308 03:24:24.221115 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/beed862c-6283-4568-aa2e-f49b31e30a3b-root\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q" Mar 08 03:24:24.221793 master-0 kubenswrapper[7387]: I0308 03:24:24.221139 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" Mar 08 03:24:24.221793 master-0 kubenswrapper[7387]: I0308 03:24:24.221162 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q" Mar 08 03:24:24.221793 master-0 kubenswrapper[7387]: I0308 03:24:24.221200 7387 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" Mar 08 03:24:24.221793 master-0 kubenswrapper[7387]: I0308 03:24:24.221236 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" Mar 08 03:24:24.221793 master-0 kubenswrapper[7387]: I0308 03:24:24.221256 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/16ca7ace-9608-4686-a039-a6ba6e3ab837-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-wwmnn\" (UID: \"16ca7ace-9608-4686-a039-a6ba6e3ab837\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" Mar 08 03:24:24.221793 master-0 kubenswrapper[7387]: I0308 03:24:24.221283 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22zrr\" (UniqueName: \"kubernetes.io/projected/beed862c-6283-4568-aa2e-f49b31e30a3b-kube-api-access-22zrr\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q" Mar 08 03:24:24.221793 master-0 kubenswrapper[7387]: I0308 03:24:24.221304 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/beed862c-6283-4568-aa2e-f49b31e30a3b-metrics-client-ca\") pod 
\"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:24:24.221793 master-0 kubenswrapper[7387]: I0308 03:24:24.221343 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-textfile\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:24:24.225922 master-0 kubenswrapper[7387]: I0308 03:24:24.224274 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/16ca7ace-9608-4686-a039-a6ba6e3ab837-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-wwmnn\" (UID: \"16ca7ace-9608-4686-a039-a6ba6e3ab837\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn"
Mar 08 03:24:24.225922 master-0 kubenswrapper[7387]: I0308 03:24:24.224643 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/16ca7ace-9608-4686-a039-a6ba6e3ab837-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-wwmnn\" (UID: \"16ca7ace-9608-4686-a039-a6ba6e3ab837\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn"
Mar 08 03:24:24.225922 master-0 kubenswrapper[7387]: I0308 03:24:24.225511 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/16ca7ace-9608-4686-a039-a6ba6e3ab837-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-wwmnn\" (UID: \"16ca7ace-9608-4686-a039-a6ba6e3ab837\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn"
Mar 08 03:24:24.258975 master-0 kubenswrapper[7387]: I0308 03:24:24.252621 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8cgc\" (UniqueName: \"kubernetes.io/projected/16ca7ace-9608-4686-a039-a6ba6e3ab837-kube-api-access-w8cgc\") pod \"openshift-state-metrics-74cc79fd76-wwmnn\" (UID: \"16ca7ace-9608-4686-a039-a6ba6e3ab837\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn"
Mar 08 03:24:24.326750 master-0 kubenswrapper[7387]: I0308 03:24:24.326683 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-textfile\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:24:24.327210 master-0 kubenswrapper[7387]: I0308 03:24:24.327146 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-textfile\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:24:24.327334 master-0 kubenswrapper[7387]: I0308 03:24:24.327306 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzgg5\" (UniqueName: \"kubernetes.io/projected/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-api-access-nzgg5\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59"
Mar 08 03:24:24.327380 master-0 kubenswrapper[7387]: I0308 03:24:24.327336 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-tls\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:24:24.327380 master-0 kubenswrapper[7387]: I0308 03:24:24.327361 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-wtmp\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:24:24.327465 master-0 kubenswrapper[7387]: I0308 03:24:24.327403 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59"
Mar 08 03:24:24.327852 master-0 kubenswrapper[7387]: E0308 03:24:24.327785 7387 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found
Mar 08 03:24:24.328107 master-0 kubenswrapper[7387]: E0308 03:24:24.327897 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-tls podName:beed862c-6283-4568-aa2e-f49b31e30a3b nodeName:}" failed. No retries permitted until 2026-03-08 03:24:24.827850697 +0000 UTC m=+801.222326378 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-tls") pod "node-exporter-sjs7q" (UID: "beed862c-6283-4568-aa2e-f49b31e30a3b") : secret "node-exporter-tls" not found
Mar 08 03:24:24.328229 master-0 kubenswrapper[7387]: I0308 03:24:24.328163 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-wtmp\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:24:24.328229 master-0 kubenswrapper[7387]: I0308 03:24:24.329044 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59"
Mar 08 03:24:24.328229 master-0 kubenswrapper[7387]: I0308 03:24:24.329977 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/beed862c-6283-4568-aa2e-f49b31e30a3b-sys\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:24:24.328229 master-0 kubenswrapper[7387]: I0308 03:24:24.330081 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59"
Mar 08 03:24:24.328229 master-0 kubenswrapper[7387]: I0308 03:24:24.330108 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/beed862c-6283-4568-aa2e-f49b31e30a3b-root\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:24:24.330336 master-0 kubenswrapper[7387]: I0308 03:24:24.330168 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/beed862c-6283-4568-aa2e-f49b31e30a3b-sys\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:24:24.330336 master-0 kubenswrapper[7387]: I0308 03:24:24.330230 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59"
Mar 08 03:24:24.330336 master-0 kubenswrapper[7387]: I0308 03:24:24.330262 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:24:24.330336 master-0 kubenswrapper[7387]: I0308 03:24:24.330299 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/beed862c-6283-4568-aa2e-f49b31e30a3b-root\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:24:24.330731 master-0 kubenswrapper[7387]: I0308 03:24:24.330685 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59"
Mar 08 03:24:24.330831 master-0 kubenswrapper[7387]: I0308 03:24:24.330811 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59"
Mar 08 03:24:24.331323 master-0 kubenswrapper[7387]: I0308 03:24:24.330875 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22zrr\" (UniqueName: \"kubernetes.io/projected/beed862c-6283-4568-aa2e-f49b31e30a3b-kube-api-access-22zrr\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:24:24.331323 master-0 kubenswrapper[7387]: I0308 03:24:24.330918 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/beed862c-6283-4568-aa2e-f49b31e30a3b-metrics-client-ca\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:24:24.331323 master-0 kubenswrapper[7387]: E0308 03:24:24.330918 7387 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found
Mar 08 03:24:24.331323 master-0 kubenswrapper[7387]: E0308 03:24:24.330994 7387 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-tls podName:bfc9ae4f-eb67-4ed1-97a1-d67e839fd601 nodeName:}" failed. No retries permitted until 2026-03-08 03:24:24.830971799 +0000 UTC m=+801.225447480 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-tls") pod "kube-state-metrics-68b88f8cb5-vxn59" (UID: "bfc9ae4f-eb67-4ed1-97a1-d67e839fd601") : secret "kube-state-metrics-tls" not found
Mar 08 03:24:24.331689 master-0 kubenswrapper[7387]: I0308 03:24:24.331668 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59"
Mar 08 03:24:24.331734 master-0 kubenswrapper[7387]: I0308 03:24:24.331675 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59"
Mar 08 03:24:24.331813 master-0 kubenswrapper[7387]: I0308 03:24:24.331793 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/beed862c-6283-4568-aa2e-f49b31e30a3b-metrics-client-ca\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:24:24.340837 master-0 kubenswrapper[7387]: I0308 03:24:24.340794 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59"
Mar 08 03:24:24.351744 master-0 kubenswrapper[7387]: I0308 03:24:24.350857 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:24:24.354285 master-0 kubenswrapper[7387]: I0308 03:24:24.354248 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzgg5\" (UniqueName: \"kubernetes.io/projected/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-api-access-nzgg5\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59"
Mar 08 03:24:24.369637 master-0 kubenswrapper[7387]: I0308 03:24:24.369585 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22zrr\" (UniqueName: \"kubernetes.io/projected/beed862c-6283-4568-aa2e-f49b31e30a3b-kube-api-access-22zrr\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:24:24.428105 master-0 kubenswrapper[7387]: I0308 03:24:24.425350 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn"
Mar 08 03:24:24.600411 master-0 kubenswrapper[7387]: I0308 03:24:24.600127 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:24:24.600411 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:24:24.600411 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:24:24.600411 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:24:24.600411 master-0 kubenswrapper[7387]: I0308 03:24:24.600185 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:24:24.633916 master-0 kubenswrapper[7387]: I0308 03:24:24.633797 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-fhncs"]
Mar 08 03:24:24.837058 master-0 kubenswrapper[7387]: I0308 03:24:24.836933 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59"
Mar 08 03:24:24.837393 master-0 kubenswrapper[7387]: I0308 03:24:24.837348 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-tls\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:24:24.840474 master-0 kubenswrapper[7387]: I0308 03:24:24.840431 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59"
Mar 08 03:24:24.841742 master-0 kubenswrapper[7387]: I0308 03:24:24.841716 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-tls\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:24:24.876157 master-0 kubenswrapper[7387]: I0308 03:24:24.876107 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn"]
Mar 08 03:24:24.882399 master-0 kubenswrapper[7387]: W0308 03:24:24.882345 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16ca7ace_9608_4686_a039_a6ba6e3ab837.slice/crio-995e6e9f26bc876fb60a003dcae56035a03e0c1a1cc126a768cf25270214d713 WatchSource:0}: Error finding container 995e6e9f26bc876fb60a003dcae56035a03e0c1a1cc126a768cf25270214d713: Status 404 returned error can't find the container with id 995e6e9f26bc876fb60a003dcae56035a03e0c1a1cc126a768cf25270214d713
Mar 08 03:24:25.071716 master-0 kubenswrapper[7387]: I0308 03:24:25.071667 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:24:25.087339 master-0 kubenswrapper[7387]: W0308 03:24:25.087244 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbeed862c_6283_4568_aa2e_f49b31e30a3b.slice/crio-2db78ea27514b302571913d9c4c80a0241da223717474e7c9dd37ca7d04999ae WatchSource:0}: Error finding container 2db78ea27514b302571913d9c4c80a0241da223717474e7c9dd37ca7d04999ae: Status 404 returned error can't find the container with id 2db78ea27514b302571913d9c4c80a0241da223717474e7c9dd37ca7d04999ae
Mar 08 03:24:25.129124 master-0 kubenswrapper[7387]: I0308 03:24:25.127866 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59"
Mar 08 03:24:25.464452 master-0 kubenswrapper[7387]: I0308 03:24:25.462858 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/38287d1a-b784-4ce9-9650-949d92469519-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-9hjss\" (UID: \"38287d1a-b784-4ce9-9650-949d92469519\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss"
Mar 08 03:24:25.466601 master-0 kubenswrapper[7387]: I0308 03:24:25.466533 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/38287d1a-b784-4ce9-9650-949d92469519-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-9hjss\" (UID: \"38287d1a-b784-4ce9-9650-949d92469519\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss"
Mar 08 03:24:25.581197 master-0 kubenswrapper[7387]: I0308 03:24:25.581035 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-ftthh"
Mar 08 03:24:25.597170 master-0 kubenswrapper[7387]: I0308 03:24:25.589363 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss"
Mar 08 03:24:25.601795 master-0 kubenswrapper[7387]: I0308 03:24:25.601642 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:24:25.601795 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:24:25.601795 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:24:25.601795 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:24:25.604815 master-0 kubenswrapper[7387]: I0308 03:24:25.604620 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:24:25.620688 master-0 kubenswrapper[7387]: I0308 03:24:25.620613 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59"]
Mar 08 03:24:25.623774 master-0 kubenswrapper[7387]: W0308 03:24:25.623725 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfc9ae4f_eb67_4ed1_97a1_d67e839fd601.slice/crio-15567f529dadb966bb3f2ed3bd55c3bbbb0f335669e907e0d29044fa59e27ca2 WatchSource:0}: Error finding container 15567f529dadb966bb3f2ed3bd55c3bbbb0f335669e907e0d29044fa59e27ca2: Status 404 returned error can't find the container with id 15567f529dadb966bb3f2ed3bd55c3bbbb0f335669e907e0d29044fa59e27ca2
Mar 08 03:24:25.637421 master-0 kubenswrapper[7387]: I0308 03:24:25.636153 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fhncs" event={"ID":"6176b631-3911-41cd-beb6-5bc2e924c3a7","Type":"ContainerStarted","Data":"569ac197a4944eb4bf02557e663c7552516d1abb887ba6b6ba1ca2ea61964c91"}
Mar 08 03:24:25.637421 master-0 kubenswrapper[7387]: I0308 03:24:25.636213 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-fhncs" event={"ID":"6176b631-3911-41cd-beb6-5bc2e924c3a7","Type":"ContainerStarted","Data":"c5a4db52edd426e8cea689535b3e9c7e16767678dd5ad98d256870c1726c756c"}
Mar 08 03:24:25.641632 master-0 kubenswrapper[7387]: I0308 03:24:25.641555 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-sjs7q" event={"ID":"beed862c-6283-4568-aa2e-f49b31e30a3b","Type":"ContainerStarted","Data":"2db78ea27514b302571913d9c4c80a0241da223717474e7c9dd37ca7d04999ae"}
Mar 08 03:24:25.644477 master-0 kubenswrapper[7387]: I0308 03:24:25.644326 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" event={"ID":"16ca7ace-9608-4686-a039-a6ba6e3ab837","Type":"ContainerStarted","Data":"c9d989ca37229a3b00d884196a5caa0fe42e1e3277d8e2b88785783aff8bce6f"}
Mar 08 03:24:25.644477 master-0 kubenswrapper[7387]: I0308 03:24:25.644396 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" event={"ID":"16ca7ace-9608-4686-a039-a6ba6e3ab837","Type":"ContainerStarted","Data":"34fc61eb4d35fe23773c3072c27fc1331ee248777fd59d21ed3dd7761a6fba14"}
Mar 08 03:24:25.644477 master-0 kubenswrapper[7387]: I0308 03:24:25.644418 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" event={"ID":"16ca7ace-9608-4686-a039-a6ba6e3ab837","Type":"ContainerStarted","Data":"995e6e9f26bc876fb60a003dcae56035a03e0c1a1cc126a768cf25270214d713"}
Mar 08 03:24:25.652412 master-0 kubenswrapper[7387]: I0308 03:24:25.652268 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" event={"ID":"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601","Type":"ContainerStarted","Data":"15567f529dadb966bb3f2ed3bd55c3bbbb0f335669e907e0d29044fa59e27ca2"}
Mar 08 03:24:25.669879 master-0 kubenswrapper[7387]: I0308 03:24:25.669773 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-fhncs" podStartSLOduration=17.669747275 podStartE2EDuration="17.669747275s" podCreationTimestamp="2026-03-08 03:24:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:24:25.661310174 +0000 UTC m=+802.055785875" watchObservedRunningTime="2026-03-08 03:24:25.669747275 +0000 UTC m=+802.064222986"
Mar 08 03:24:26.090352 master-0 kubenswrapper[7387]: I0308 03:24:26.090292 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss"]
Mar 08 03:24:26.515481 master-0 kubenswrapper[7387]: W0308 03:24:26.515422 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38287d1a_b784_4ce9_9650_949d92469519.slice/crio-7f21e214cb8d847d79985954284fcf2d5d0fe1c85a034843bd4226982b10fa7b WatchSource:0}: Error finding container 7f21e214cb8d847d79985954284fcf2d5d0fe1c85a034843bd4226982b10fa7b: Status 404 returned error can't find the container with id 7f21e214cb8d847d79985954284fcf2d5d0fe1c85a034843bd4226982b10fa7b
Mar 08 03:24:26.598309 master-0 kubenswrapper[7387]: I0308 03:24:26.598268 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:24:26.598309 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:24:26.598309 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:24:26.598309 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:24:26.598309 master-0 kubenswrapper[7387]: I0308 03:24:26.598324 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:24:26.669220 master-0 kubenswrapper[7387]: I0308 03:24:26.669172 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss" event={"ID":"38287d1a-b784-4ce9-9650-949d92469519","Type":"ContainerStarted","Data":"7f21e214cb8d847d79985954284fcf2d5d0fe1c85a034843bd4226982b10fa7b"}
Mar 08 03:24:26.690216 master-0 kubenswrapper[7387]: I0308 03:24:26.690123 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-fb844\" (UID: \"27f5a0ab-3811-4c17-adc1-9ca48ae18ee1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844"
Mar 08 03:24:26.698688 master-0 kubenswrapper[7387]: I0308 03:24:26.696009 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-fb844\" (UID: \"27f5a0ab-3811-4c17-adc1-9ca48ae18ee1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844"
Mar 08 03:24:26.900384 master-0 kubenswrapper[7387]: I0308 03:24:26.900327 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-bhtmv"
Mar 08 03:24:26.908487 master-0 kubenswrapper[7387]: I0308 03:24:26.908432 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844"
Mar 08 03:24:27.307504 master-0 kubenswrapper[7387]: I0308 03:24:27.307396 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844"]
Mar 08 03:24:27.599078 master-0 kubenswrapper[7387]: I0308 03:24:27.599036 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:24:27.599078 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:24:27.599078 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:24:27.599078 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:24:27.599224 master-0 kubenswrapper[7387]: I0308 03:24:27.599097 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:24:27.678120 master-0 kubenswrapper[7387]: I0308 03:24:27.678047 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss" event={"ID":"38287d1a-b784-4ce9-9650-949d92469519","Type":"ContainerStarted","Data":"a1db26152aeeaa3d39e6479a7cc882f4e93d18dc14a3a79d78d215777535479b"}
Mar 08 03:24:27.682441 master-0 kubenswrapper[7387]: I0308 03:24:27.679373 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844" event={"ID":"27f5a0ab-3811-4c17-adc1-9ca48ae18ee1","Type":"ContainerStarted","Data":"9cf19296313ccb0a9f49159a002819b23609566806a638c368fc850d3dc27bd2"}
Mar 08 03:24:27.682441 master-0 kubenswrapper[7387]: I0308 03:24:27.681146 7387 generic.go:334] "Generic (PLEG): container finished" podID="beed862c-6283-4568-aa2e-f49b31e30a3b" containerID="d1050d392274bd46ce1eee6b5d4efe54cfd2cef89c6e2cd2b5d4626e3c237593" exitCode=0
Mar 08 03:24:27.682441 master-0 kubenswrapper[7387]: I0308 03:24:27.681204 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-sjs7q" event={"ID":"beed862c-6283-4568-aa2e-f49b31e30a3b","Type":"ContainerDied","Data":"d1050d392274bd46ce1eee6b5d4efe54cfd2cef89c6e2cd2b5d4626e3c237593"}
Mar 08 03:24:27.685018 master-0 kubenswrapper[7387]: I0308 03:24:27.684988 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" event={"ID":"16ca7ace-9608-4686-a039-a6ba6e3ab837","Type":"ContainerStarted","Data":"d8d6eef97c55bd6c1ab5be123fde58c8a1d9d8038ef6436d22af28ab603b3481"}
Mar 08 03:24:27.730077 master-0 kubenswrapper[7387]: I0308 03:24:27.729926 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" podStartSLOduration=2.399261326 podStartE2EDuration="3.729885829s" podCreationTimestamp="2026-03-08 03:24:24 +0000 UTC" firstStartedPulling="2026-03-08 03:24:25.240685975 +0000 UTC m=+801.635161666" lastFinishedPulling="2026-03-08 03:24:26.571310488 +0000 UTC m=+802.965786169" observedRunningTime="2026-03-08 03:24:27.729716675 +0000 UTC m=+804.124192366" watchObservedRunningTime="2026-03-08 03:24:27.729885829 +0000 UTC m=+804.124361510"
Mar 08 03:24:28.314740 master-0 kubenswrapper[7387]: I0308 03:24:28.314654 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-cert\") pod \"cluster-autoscaler-operator-69576476f7-jd7rl\" (UID: \"2ffe00fd-6834-4a5b-8b0b-b467d284f23c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl"
Mar 08 03:24:28.317895 master-0 kubenswrapper[7387]: I0308 03:24:28.317844 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-cert\") pod \"cluster-autoscaler-operator-69576476f7-jd7rl\" (UID: \"2ffe00fd-6834-4a5b-8b0b-b467d284f23c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl"
Mar 08 03:24:28.448983 master-0 kubenswrapper[7387]: I0308 03:24:28.448889 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-p5nps"
Mar 08 03:24:28.457873 master-0 kubenswrapper[7387]: I0308 03:24:28.457832 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl"
Mar 08 03:24:28.598258 master-0 kubenswrapper[7387]: I0308 03:24:28.598222 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:24:28.598258 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:24:28.598258 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:24:28.598258 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:24:28.598416 master-0 kubenswrapper[7387]: I0308 03:24:28.598280 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:24:28.694262 master-0 kubenswrapper[7387]: I0308 03:24:28.694034 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-sjs7q" event={"ID":"beed862c-6283-4568-aa2e-f49b31e30a3b","Type":"ContainerStarted","Data":"23113644db3486b5baf111f9c034f0beee0d4a97dfc7fd092a365cc9557740be"}
Mar 08 03:24:28.694262 master-0 kubenswrapper[7387]: I0308 03:24:28.694084 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-sjs7q" event={"ID":"beed862c-6283-4568-aa2e-f49b31e30a3b","Type":"ContainerStarted","Data":"8efd8bd5f09db8c7090c0ffbb84a44760aad132166c9a147bfb8e509a57dd50c"}
Mar 08 03:24:28.699003 master-0 kubenswrapper[7387]: I0308 03:24:28.698979 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" event={"ID":"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601","Type":"ContainerStarted","Data":"d58e56842a18e12b6fd1a155822241273348d201533cc61450ecd69d8d0400fa"}
Mar 08 03:24:28.699071 master-0 kubenswrapper[7387]: I0308 03:24:28.699006 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" event={"ID":"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601","Type":"ContainerStarted","Data":"e5a48e019815ad4e4043090b8fa18362da30f70a66b730a7258c43e1ed294245"}
Mar 08 03:24:28.699071 master-0 kubenswrapper[7387]: I0308 03:24:28.699018 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" event={"ID":"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601","Type":"ContainerStarted","Data":"4a326aee05c33068d1b36472178d0e0a87bc3ec42be85fc4f49aad10ef978452"}
Mar 08 03:24:28.714937 master-0 kubenswrapper[7387]: I0308 03:24:28.713608 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-sjs7q" podStartSLOduration=3.23996997 podStartE2EDuration="4.713590886s" podCreationTimestamp="2026-03-08 03:24:24 +0000 UTC" firstStartedPulling="2026-03-08 03:24:25.08917774 +0000 UTC m=+801.483653421" lastFinishedPulling="2026-03-08 03:24:26.562798656 +0000 UTC m=+802.957274337" observedRunningTime="2026-03-08 03:24:28.711271786 +0000 UTC m=+805.105747457" watchObservedRunningTime="2026-03-08 03:24:28.713590886 +0000 UTC m=+805.108066557"
Mar 08 03:24:28.917781 master-0 kubenswrapper[7387]: I0308 03:24:28.917647 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" podStartSLOduration=2.945233258 podStartE2EDuration="4.917627052s" podCreationTimestamp="2026-03-08 03:24:24 +0000 UTC" firstStartedPulling="2026-03-08 03:24:25.627153873 +0000 UTC m=+802.021629594" lastFinishedPulling="2026-03-08 03:24:27.599547697 +0000 UTC m=+803.994023388" observedRunningTime="2026-03-08 03:24:28.73287385 +0000 UTC m=+805.127349541" watchObservedRunningTime="2026-03-08 03:24:28.917627052 +0000 UTC m=+805.312102733"
Mar 08 03:24:28.919130 master-0 kubenswrapper[7387]: I0308 03:24:28.919109 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl"]
Mar 08 03:24:28.929215 master-0 kubenswrapper[7387]: W0308 03:24:28.929167 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ffe00fd_6834_4a5b_8b0b_b467d284f23c.slice/crio-b611cc0d60bde7b49abae1aff82de97336ebe3d15e74f2544de647745e83e553 WatchSource:0}: Error finding container b611cc0d60bde7b49abae1aff82de97336ebe3d15e74f2544de647745e83e553: Status 404 returned error can't find the container with id b611cc0d60bde7b49abae1aff82de97336ebe3d15e74f2544de647745e83e553
Mar 08 03:24:29.582615 master-0 kubenswrapper[7387]: I0308 03:24:29.580185 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-6977dfbb45-dwjx9"]
Mar 08 03:24:29.582615 master-0 kubenswrapper[7387]: I0308 03:24:29.580868 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9"
Mar 08 03:24:29.583925 master-0 kubenswrapper[7387]: I0308 03:24:29.583861 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-d4zhc"
Mar 08 03:24:29.584553 master-0 kubenswrapper[7387]: I0308 03:24:29.584525 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Mar 08 03:24:29.584673 master-0 kubenswrapper[7387]: I0308 03:24:29.584619 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Mar 08 03:24:29.584673 master-0 kubenswrapper[7387]: I0308 03:24:29.584659 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Mar 08 03:24:29.584767 master-0 kubenswrapper[7387]: I0308 03:24:29.584624 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-da0kci31im4hq"
Mar 08 03:24:29.586101 master-0 kubenswrapper[7387]: I0308 03:24:29.586064 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Mar 08 03:24:29.600743 master-0 kubenswrapper[7387]: I0308 03:24:29.600699 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:24:29.600743 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:24:29.600743 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:24:29.600743 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:24:29.600985 master-0 kubenswrapper[7387]: I0308 03:24:29.600763 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9"
podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:29.605986 master-0 kubenswrapper[7387]: I0308 03:24:29.605926 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-6977dfbb45-dwjx9"] Mar 08 03:24:29.642750 master-0 kubenswrapper[7387]: I0308 03:24:29.642697 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-client-certs\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:24:29.642964 master-0 kubenswrapper[7387]: I0308 03:24:29.642765 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:24:29.643072 master-0 kubenswrapper[7387]: I0308 03:24:29.643002 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-client-ca-bundle\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:24:29.643072 master-0 kubenswrapper[7387]: I0308 03:24:29.643058 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: 
\"kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-metrics-server-audit-profiles\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:24:29.643267 master-0 kubenswrapper[7387]: I0308 03:24:29.643218 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-server-tls\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:24:29.643349 master-0 kubenswrapper[7387]: I0308 03:24:29.643322 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-audit-log\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:24:29.643439 master-0 kubenswrapper[7387]: I0308 03:24:29.643417 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppbl6\" (UniqueName: \"kubernetes.io/projected/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-kube-api-access-ppbl6\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:24:29.706330 master-0 kubenswrapper[7387]: I0308 03:24:29.706271 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" event={"ID":"2ffe00fd-6834-4a5b-8b0b-b467d284f23c","Type":"ContainerStarted","Data":"19628e3953bcac1a3def2c19f2e776979687c91d7a473cf2c8903b252ed2f487"} Mar 08 03:24:29.706330 master-0 kubenswrapper[7387]: I0308 
03:24:29.706342 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" event={"ID":"2ffe00fd-6834-4a5b-8b0b-b467d284f23c","Type":"ContainerStarted","Data":"b611cc0d60bde7b49abae1aff82de97336ebe3d15e74f2544de647745e83e553"} Mar 08 03:24:29.744888 master-0 kubenswrapper[7387]: I0308 03:24:29.744830 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-client-certs\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:24:29.745080 master-0 kubenswrapper[7387]: I0308 03:24:29.744929 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:24:29.745080 master-0 kubenswrapper[7387]: I0308 03:24:29.744975 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-client-ca-bundle\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:24:29.745202 master-0 kubenswrapper[7387]: I0308 03:24:29.745159 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-metrics-server-audit-profiles\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: 
\"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:24:29.745359 master-0 kubenswrapper[7387]: I0308 03:24:29.745331 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-server-tls\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:24:29.745456 master-0 kubenswrapper[7387]: I0308 03:24:29.745392 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-audit-log\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:24:29.745812 master-0 kubenswrapper[7387]: I0308 03:24:29.745781 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppbl6\" (UniqueName: \"kubernetes.io/projected/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-kube-api-access-ppbl6\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:24:29.746062 master-0 kubenswrapper[7387]: I0308 03:24:29.746036 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-audit-log\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:24:29.746647 master-0 kubenswrapper[7387]: I0308 03:24:29.746606 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:24:29.746872 master-0 kubenswrapper[7387]: I0308 03:24:29.746845 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-metrics-server-audit-profiles\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:24:29.748617 master-0 kubenswrapper[7387]: I0308 03:24:29.748492 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-client-certs\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:24:29.749278 master-0 kubenswrapper[7387]: I0308 03:24:29.749224 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-client-ca-bundle\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:24:29.763181 master-0 kubenswrapper[7387]: I0308 03:24:29.763140 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-server-tls\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:24:29.764512 master-0 
kubenswrapper[7387]: I0308 03:24:29.764470 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppbl6\" (UniqueName: \"kubernetes.io/projected/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-kube-api-access-ppbl6\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:24:29.916959 master-0 kubenswrapper[7387]: I0308 03:24:29.916861 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:24:30.314948 master-0 kubenswrapper[7387]: I0308 03:24:30.314885 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-6977dfbb45-dwjx9"] Mar 08 03:24:30.334188 master-0 kubenswrapper[7387]: W0308 03:24:30.334087 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e82d678_b5bb_4aec_9b5d_435305e8bdc2.slice/crio-005487746ccdf8af07cdeab4d2100f98db1e134d2cd05ee46be8a62328152f7d WatchSource:0}: Error finding container 005487746ccdf8af07cdeab4d2100f98db1e134d2cd05ee46be8a62328152f7d: Status 404 returned error can't find the container with id 005487746ccdf8af07cdeab4d2100f98db1e134d2cd05ee46be8a62328152f7d Mar 08 03:24:30.598260 master-0 kubenswrapper[7387]: I0308 03:24:30.598200 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:30.598260 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:30.598260 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:30.598260 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:30.598505 master-0 kubenswrapper[7387]: I0308 03:24:30.598296 7387 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:30.718953 master-0 kubenswrapper[7387]: I0308 03:24:30.718665 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844" event={"ID":"27f5a0ab-3811-4c17-adc1-9ca48ae18ee1","Type":"ContainerStarted","Data":"5e0d23dd0795193b739dd755af2d687f1f515037eed329d3b9e2596afb9060ee"} Mar 08 03:24:30.718953 master-0 kubenswrapper[7387]: I0308 03:24:30.718756 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844" event={"ID":"27f5a0ab-3811-4c17-adc1-9ca48ae18ee1","Type":"ContainerStarted","Data":"e989039f83eff54b6810112ca01a8eb419324c68cbc151c8d0ce8792d7613d26"} Mar 08 03:24:30.720785 master-0 kubenswrapper[7387]: I0308 03:24:30.720736 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" event={"ID":"1e82d678-b5bb-4aec-9b5d-435305e8bdc2","Type":"ContainerStarted","Data":"005487746ccdf8af07cdeab4d2100f98db1e134d2cd05ee46be8a62328152f7d"} Mar 08 03:24:30.744762 master-0 kubenswrapper[7387]: I0308 03:24:30.744656 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844" podStartSLOduration=34.5858303 podStartE2EDuration="36.744631211s" podCreationTimestamp="2026-03-08 03:23:54 +0000 UTC" firstStartedPulling="2026-03-08 03:24:27.663230319 +0000 UTC m=+804.057706020" lastFinishedPulling="2026-03-08 03:24:29.82203125 +0000 UTC m=+806.216506931" observedRunningTime="2026-03-08 03:24:30.74036939 +0000 UTC m=+807.134845081" watchObservedRunningTime="2026-03-08 03:24:30.744631211 +0000 UTC m=+807.139106922" Mar 08 
03:24:31.066702 master-0 kubenswrapper[7387]: I0308 03:24:31.066640 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:24:31.073642 master-0 kubenswrapper[7387]: I0308 03:24:31.073598 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:24:31.272542 master-0 kubenswrapper[7387]: I0308 03:24:31.272491 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-s25xz" Mar 08 03:24:31.280062 master-0 kubenswrapper[7387]: I0308 03:24:31.280008 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:24:31.598728 master-0 kubenswrapper[7387]: I0308 03:24:31.598671 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:31.598728 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:31.598728 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:31.598728 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:31.599099 master-0 kubenswrapper[7387]: I0308 03:24:31.598754 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:31.728739 master-0 kubenswrapper[7387]: I0308 03:24:31.728684 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" event={"ID":"2ffe00fd-6834-4a5b-8b0b-b467d284f23c","Type":"ContainerStarted","Data":"2858485e79b00900bd163b6f7b2d0d61e9d6beabaa41767ec01d73da348ed50d"} Mar 08 03:24:31.747325 master-0 kubenswrapper[7387]: I0308 03:24:31.747234 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" podStartSLOduration=33.722199613 podStartE2EDuration="35.747215101s" podCreationTimestamp="2026-03-08 03:23:56 +0000 UTC" firstStartedPulling="2026-03-08 03:24:29.066256122 +0000 UTC m=+805.460731803" lastFinishedPulling="2026-03-08 03:24:31.09127161 +0000 UTC m=+807.485747291" observedRunningTime="2026-03-08 03:24:31.74293975 +0000 UTC m=+808.137415471" watchObservedRunningTime="2026-03-08 03:24:31.747215101 +0000 UTC m=+808.141690792" 
Mar 08 03:24:32.598956 master-0 kubenswrapper[7387]: I0308 03:24:32.598878 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:32.598956 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:32.598956 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:32.598956 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:32.599252 master-0 kubenswrapper[7387]: I0308 03:24:32.598968 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:33.225245 master-0 kubenswrapper[7387]: I0308 03:24:33.225193 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv"] Mar 08 03:24:33.225758 master-0 kubenswrapper[7387]: I0308 03:24:33.225404 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" containerName="controller-manager" containerID="cri-o://41ca15e4a6ad4847ca08ee2bbd0a8d8131abfdfd04eefcf604b4ac8e41fe27f8" gracePeriod=30 Mar 08 03:24:33.243748 master-0 kubenswrapper[7387]: I0308 03:24:33.243239 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj"] Mar 08 03:24:33.243748 master-0 kubenswrapper[7387]: I0308 03:24:33.243502 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" podUID="e2495994-736c-4916-b210-ff5633f3387d" 
containerName="route-controller-manager" containerID="cri-o://ba06595e6a5f3ba16e78e9f249cd73ba267f2f907f5c29c1de1760f3a56ccdd7" gracePeriod=30 Mar 08 03:24:33.598580 master-0 kubenswrapper[7387]: I0308 03:24:33.598503 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:33.598580 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:33.598580 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:33.598580 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:33.598858 master-0 kubenswrapper[7387]: I0308 03:24:33.598588 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:34.403107 master-0 kubenswrapper[7387]: I0308 03:24:34.403055 7387 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" Mar 08 03:24:34.450295 master-0 kubenswrapper[7387]: I0308 03:24:34.448293 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-75cd54f7f-2bg6l"] Mar 08 03:24:34.450295 master-0 kubenswrapper[7387]: E0308 03:24:34.448539 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" containerName="controller-manager" Mar 08 03:24:34.450295 master-0 kubenswrapper[7387]: I0308 03:24:34.448551 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" containerName="controller-manager" Mar 08 03:24:34.450295 master-0 kubenswrapper[7387]: E0308 03:24:34.448572 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" containerName="controller-manager" Mar 08 03:24:34.450295 master-0 kubenswrapper[7387]: I0308 03:24:34.448581 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" containerName="controller-manager" Mar 08 03:24:34.450295 master-0 kubenswrapper[7387]: I0308 03:24:34.448675 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" containerName="controller-manager" Mar 08 03:24:34.450295 master-0 kubenswrapper[7387]: I0308 03:24:34.449057 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:24:34.453562 master-0 kubenswrapper[7387]: I0308 03:24:34.452944 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-h4sjt" Mar 08 03:24:34.473658 master-0 kubenswrapper[7387]: I0308 03:24:34.472033 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75cd54f7f-2bg6l"] Mar 08 03:24:34.529516 master-0 kubenswrapper[7387]: I0308 03:24:34.529383 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1c09ba-b44c-446a-abe0-53ac3e910a77-serving-cert\") pod \"dd1c09ba-b44c-446a-abe0-53ac3e910a77\" (UID: \"dd1c09ba-b44c-446a-abe0-53ac3e910a77\") " Mar 08 03:24:34.529516 master-0 kubenswrapper[7387]: I0308 03:24:34.529504 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd1c09ba-b44c-446a-abe0-53ac3e910a77-proxy-ca-bundles\") pod \"dd1c09ba-b44c-446a-abe0-53ac3e910a77\" (UID: \"dd1c09ba-b44c-446a-abe0-53ac3e910a77\") " Mar 08 03:24:34.529729 master-0 kubenswrapper[7387]: I0308 03:24:34.529544 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd1c09ba-b44c-446a-abe0-53ac3e910a77-client-ca\") pod \"dd1c09ba-b44c-446a-abe0-53ac3e910a77\" (UID: \"dd1c09ba-b44c-446a-abe0-53ac3e910a77\") " Mar 08 03:24:34.529729 master-0 kubenswrapper[7387]: I0308 03:24:34.529629 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4np7\" (UniqueName: \"kubernetes.io/projected/dd1c09ba-b44c-446a-abe0-53ac3e910a77-kube-api-access-g4np7\") pod \"dd1c09ba-b44c-446a-abe0-53ac3e910a77\" (UID: \"dd1c09ba-b44c-446a-abe0-53ac3e910a77\") " Mar 08 
03:24:34.529729 master-0 kubenswrapper[7387]: I0308 03:24:34.529659 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1c09ba-b44c-446a-abe0-53ac3e910a77-config\") pod \"dd1c09ba-b44c-446a-abe0-53ac3e910a77\" (UID: \"dd1c09ba-b44c-446a-abe0-53ac3e910a77\") " Mar 08 03:24:34.529894 master-0 kubenswrapper[7387]: I0308 03:24:34.529875 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd53c98b-51cc-498a-ab37-f743a27bdcfb-serving-cert\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:24:34.530043 master-0 kubenswrapper[7387]: I0308 03:24:34.530004 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-client-ca\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:24:34.530043 master-0 kubenswrapper[7387]: I0308 03:24:34.530032 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-proxy-ca-bundles\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:24:34.530115 master-0 kubenswrapper[7387]: I0308 03:24:34.530060 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz7l8\" (UniqueName: \"kubernetes.io/projected/bd53c98b-51cc-498a-ab37-f743a27bdcfb-kube-api-access-hz7l8\") pod 
\"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:24:34.530162 master-0 kubenswrapper[7387]: I0308 03:24:34.530141 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-config\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:24:34.531522 master-0 kubenswrapper[7387]: I0308 03:24:34.531481 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd1c09ba-b44c-446a-abe0-53ac3e910a77-config" (OuterVolumeSpecName: "config") pod "dd1c09ba-b44c-446a-abe0-53ac3e910a77" (UID: "dd1c09ba-b44c-446a-abe0-53ac3e910a77"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:24:34.532000 master-0 kubenswrapper[7387]: I0308 03:24:34.531958 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd1c09ba-b44c-446a-abe0-53ac3e910a77-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "dd1c09ba-b44c-446a-abe0-53ac3e910a77" (UID: "dd1c09ba-b44c-446a-abe0-53ac3e910a77"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:24:34.546469 master-0 kubenswrapper[7387]: I0308 03:24:34.546413 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1c09ba-b44c-446a-abe0-53ac3e910a77-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dd1c09ba-b44c-446a-abe0-53ac3e910a77" (UID: "dd1c09ba-b44c-446a-abe0-53ac3e910a77"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:24:34.551248 master-0 kubenswrapper[7387]: I0308 03:24:34.550174 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd1c09ba-b44c-446a-abe0-53ac3e910a77-client-ca" (OuterVolumeSpecName: "client-ca") pod "dd1c09ba-b44c-446a-abe0-53ac3e910a77" (UID: "dd1c09ba-b44c-446a-abe0-53ac3e910a77"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:24:34.561916 master-0 kubenswrapper[7387]: I0308 03:24:34.559074 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd1c09ba-b44c-446a-abe0-53ac3e910a77-kube-api-access-g4np7" (OuterVolumeSpecName: "kube-api-access-g4np7") pod "dd1c09ba-b44c-446a-abe0-53ac3e910a77" (UID: "dd1c09ba-b44c-446a-abe0-53ac3e910a77"). InnerVolumeSpecName "kube-api-access-g4np7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:24:34.608637 master-0 kubenswrapper[7387]: I0308 03:24:34.608579 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:34.608637 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:34.608637 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:34.608637 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:34.609053 master-0 kubenswrapper[7387]: I0308 03:24:34.608973 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:34.631769 master-0 kubenswrapper[7387]: I0308 03:24:34.631701 7387 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd53c98b-51cc-498a-ab37-f743a27bdcfb-serving-cert\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:24:34.631769 master-0 kubenswrapper[7387]: I0308 03:24:34.631769 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-client-ca\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:24:34.631880 master-0 kubenswrapper[7387]: I0308 03:24:34.631791 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-proxy-ca-bundles\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:24:34.631880 master-0 kubenswrapper[7387]: I0308 03:24:34.631813 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz7l8\" (UniqueName: \"kubernetes.io/projected/bd53c98b-51cc-498a-ab37-f743a27bdcfb-kube-api-access-hz7l8\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:24:34.631880 master-0 kubenswrapper[7387]: I0308 03:24:34.631859 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-config\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " 
pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:24:34.631984 master-0 kubenswrapper[7387]: I0308 03:24:34.631961 7387 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1c09ba-b44c-446a-abe0-53ac3e910a77-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 03:24:34.631984 master-0 kubenswrapper[7387]: I0308 03:24:34.631972 7387 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd1c09ba-b44c-446a-abe0-53ac3e910a77-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 08 03:24:34.631984 master-0 kubenswrapper[7387]: I0308 03:24:34.631982 7387 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd1c09ba-b44c-446a-abe0-53ac3e910a77-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 03:24:34.632071 master-0 kubenswrapper[7387]: I0308 03:24:34.631993 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4np7\" (UniqueName: \"kubernetes.io/projected/dd1c09ba-b44c-446a-abe0-53ac3e910a77-kube-api-access-g4np7\") on node \"master-0\" DevicePath \"\"" Mar 08 03:24:34.632071 master-0 kubenswrapper[7387]: I0308 03:24:34.632004 7387 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1c09ba-b44c-446a-abe0-53ac3e910a77-config\") on node \"master-0\" DevicePath \"\"" Mar 08 03:24:34.633239 master-0 kubenswrapper[7387]: I0308 03:24:34.633201 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-client-ca\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:24:34.641160 master-0 kubenswrapper[7387]: I0308 03:24:34.637515 7387 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd53c98b-51cc-498a-ab37-f743a27bdcfb-serving-cert\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:24:34.642299 master-0 kubenswrapper[7387]: I0308 03:24:34.642253 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-proxy-ca-bundles\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:24:34.643255 master-0 kubenswrapper[7387]: I0308 03:24:34.643225 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-config\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:24:34.656849 master-0 kubenswrapper[7387]: I0308 03:24:34.656804 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz7l8\" (UniqueName: \"kubernetes.io/projected/bd53c98b-51cc-498a-ab37-f743a27bdcfb-kube-api-access-hz7l8\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:24:34.671610 master-0 kubenswrapper[7387]: I0308 03:24:34.671567 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9cmmj"] Mar 08 03:24:34.671941 master-0 kubenswrapper[7387]: I0308 03:24:34.671917 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" 
containerName="controller-manager" Mar 08 03:24:34.672516 master-0 kubenswrapper[7387]: I0308 03:24:34.672289 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" Mar 08 03:24:34.679606 master-0 kubenswrapper[7387]: I0308 03:24:34.676133 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-w9pqc" Mar 08 03:24:34.679606 master-0 kubenswrapper[7387]: I0308 03:24:34.676342 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Mar 08 03:24:34.732703 master-0 kubenswrapper[7387]: I0308 03:24:34.732667 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqddd\" (UniqueName: \"kubernetes.io/projected/645d8c66-50e1-4e0e-ae02-5a766526652e-kube-api-access-zqddd\") pod \"cni-sysctl-allowlist-ds-9cmmj\" (UID: \"645d8c66-50e1-4e0e-ae02-5a766526652e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" Mar 08 03:24:34.732789 master-0 kubenswrapper[7387]: I0308 03:24:34.732712 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/645d8c66-50e1-4e0e-ae02-5a766526652e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-9cmmj\" (UID: \"645d8c66-50e1-4e0e-ae02-5a766526652e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" Mar 08 03:24:34.732789 master-0 kubenswrapper[7387]: I0308 03:24:34.732785 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/645d8c66-50e1-4e0e-ae02-5a766526652e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-9cmmj\" (UID: \"645d8c66-50e1-4e0e-ae02-5a766526652e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" Mar 08 03:24:34.732855 master-0 kubenswrapper[7387]: I0308 
03:24:34.732803 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/645d8c66-50e1-4e0e-ae02-5a766526652e-ready\") pod \"cni-sysctl-allowlist-ds-9cmmj\" (UID: \"645d8c66-50e1-4e0e-ae02-5a766526652e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" Mar 08 03:24:34.822961 master-0 kubenswrapper[7387]: I0308 03:24:34.820658 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-8c4996cd4-qsvqj_e2495994-736c-4916-b210-ff5633f3387d/route-controller-manager/1.log" Mar 08 03:24:34.822961 master-0 kubenswrapper[7387]: I0308 03:24:34.820709 7387 generic.go:334] "Generic (PLEG): container finished" podID="e2495994-736c-4916-b210-ff5633f3387d" containerID="ba06595e6a5f3ba16e78e9f249cd73ba267f2f907f5c29c1de1760f3a56ccdd7" exitCode=0 Mar 08 03:24:34.822961 master-0 kubenswrapper[7387]: I0308 03:24:34.820757 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" event={"ID":"e2495994-736c-4916-b210-ff5633f3387d","Type":"ContainerDied","Data":"ba06595e6a5f3ba16e78e9f249cd73ba267f2f907f5c29c1de1760f3a56ccdd7"} Mar 08 03:24:34.822961 master-0 kubenswrapper[7387]: I0308 03:24:34.820792 7387 scope.go:117] "RemoveContainer" containerID="d6083de08fa8a9f86a3a4636376820118e5d2c03d8b520f0635e9d2361ef8efe" Mar 08 03:24:34.824967 master-0 kubenswrapper[7387]: I0308 03:24:34.824456 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" event={"ID":"1e82d678-b5bb-4aec-9b5d-435305e8bdc2","Type":"ContainerStarted","Data":"f76a1bff6446c8bbd3a34e5b92f198922251d11d225fb45f11ae978bed808876"} Mar 08 03:24:34.829657 master-0 kubenswrapper[7387]: I0308 03:24:34.829214 7387 generic.go:334] "Generic (PLEG): container finished" podID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" 
containerID="41ca15e4a6ad4847ca08ee2bbd0a8d8131abfdfd04eefcf604b4ac8e41fe27f8" exitCode=0 Mar 08 03:24:34.829657 master-0 kubenswrapper[7387]: I0308 03:24:34.829307 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" event={"ID":"dd1c09ba-b44c-446a-abe0-53ac3e910a77","Type":"ContainerDied","Data":"41ca15e4a6ad4847ca08ee2bbd0a8d8131abfdfd04eefcf604b4ac8e41fe27f8"} Mar 08 03:24:34.829657 master-0 kubenswrapper[7387]: I0308 03:24:34.829334 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" event={"ID":"dd1c09ba-b44c-446a-abe0-53ac3e910a77","Type":"ContainerDied","Data":"187df35e7836b813c131539b8b3d9d53cf0016c310d2d5141489db5ae6ac75e3"} Mar 08 03:24:34.829657 master-0 kubenswrapper[7387]: I0308 03:24:34.829375 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv" Mar 08 03:24:34.833919 master-0 kubenswrapper[7387]: I0308 03:24:34.830939 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss" event={"ID":"38287d1a-b784-4ce9-9650-949d92469519","Type":"ContainerStarted","Data":"6b77d1aa000b2558b6a9776f674b09199ce16f8961114d6e6a7d2e0422bd739b"} Mar 08 03:24:34.833919 master-0 kubenswrapper[7387]: I0308 03:24:34.833373 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/645d8c66-50e1-4e0e-ae02-5a766526652e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-9cmmj\" (UID: \"645d8c66-50e1-4e0e-ae02-5a766526652e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" Mar 08 03:24:34.833919 master-0 kubenswrapper[7387]: I0308 03:24:34.833400 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: 
\"kubernetes.io/empty-dir/645d8c66-50e1-4e0e-ae02-5a766526652e-ready\") pod \"cni-sysctl-allowlist-ds-9cmmj\" (UID: \"645d8c66-50e1-4e0e-ae02-5a766526652e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" Mar 08 03:24:34.833919 master-0 kubenswrapper[7387]: I0308 03:24:34.833447 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqddd\" (UniqueName: \"kubernetes.io/projected/645d8c66-50e1-4e0e-ae02-5a766526652e-kube-api-access-zqddd\") pod \"cni-sysctl-allowlist-ds-9cmmj\" (UID: \"645d8c66-50e1-4e0e-ae02-5a766526652e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" Mar 08 03:24:34.833919 master-0 kubenswrapper[7387]: I0308 03:24:34.833468 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/645d8c66-50e1-4e0e-ae02-5a766526652e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-9cmmj\" (UID: \"645d8c66-50e1-4e0e-ae02-5a766526652e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" Mar 08 03:24:34.833919 master-0 kubenswrapper[7387]: I0308 03:24:34.833568 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/645d8c66-50e1-4e0e-ae02-5a766526652e-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-9cmmj\" (UID: \"645d8c66-50e1-4e0e-ae02-5a766526652e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" Mar 08 03:24:34.838916 master-0 kubenswrapper[7387]: I0308 03:24:34.834212 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/645d8c66-50e1-4e0e-ae02-5a766526652e-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-9cmmj\" (UID: \"645d8c66-50e1-4e0e-ae02-5a766526652e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" Mar 08 03:24:34.838916 master-0 kubenswrapper[7387]: I0308 03:24:34.834438 7387 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/645d8c66-50e1-4e0e-ae02-5a766526652e-ready\") pod \"cni-sysctl-allowlist-ds-9cmmj\" (UID: \"645d8c66-50e1-4e0e-ae02-5a766526652e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" Mar 08 03:24:34.859933 master-0 kubenswrapper[7387]: I0308 03:24:34.856324 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" podStartSLOduration=1.7868088530000001 podStartE2EDuration="5.856309406s" podCreationTimestamp="2026-03-08 03:24:29 +0000 UTC" firstStartedPulling="2026-03-08 03:24:30.335846252 +0000 UTC m=+806.730321933" lastFinishedPulling="2026-03-08 03:24:34.405346805 +0000 UTC m=+810.799822486" observedRunningTime="2026-03-08 03:24:34.855205578 +0000 UTC m=+811.249681259" watchObservedRunningTime="2026-03-08 03:24:34.856309406 +0000 UTC m=+811.250785087" Mar 08 03:24:34.859933 master-0 kubenswrapper[7387]: I0308 03:24:34.859167 7387 scope.go:117] "RemoveContainer" containerID="41ca15e4a6ad4847ca08ee2bbd0a8d8131abfdfd04eefcf604b4ac8e41fe27f8" Mar 08 03:24:34.864998 master-0 kubenswrapper[7387]: I0308 03:24:34.864502 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqddd\" (UniqueName: \"kubernetes.io/projected/645d8c66-50e1-4e0e-ae02-5a766526652e-kube-api-access-zqddd\") pod \"cni-sysctl-allowlist-ds-9cmmj\" (UID: \"645d8c66-50e1-4e0e-ae02-5a766526652e\") " pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" Mar 08 03:24:34.892449 master-0 kubenswrapper[7387]: I0308 03:24:34.883761 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv"] Mar 08 03:24:34.892449 master-0 kubenswrapper[7387]: I0308 03:24:34.889205 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-77c5c9d7dd-xtftv"] Mar 08 03:24:34.902051 master-0 kubenswrapper[7387]: I0308 03:24:34.897389 
7387 scope.go:117] "RemoveContainer" containerID="101ea5b74925f2629d3b673abc3df2646b2971ed5a02a4d33f72d8e0bafc02dc" Mar 08 03:24:34.909812 master-0 kubenswrapper[7387]: I0308 03:24:34.909375 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss" podStartSLOduration=34.24105653 podStartE2EDuration="41.909355801s" podCreationTimestamp="2026-03-08 03:23:53 +0000 UTC" firstStartedPulling="2026-03-08 03:24:26.774294416 +0000 UTC m=+803.168770097" lastFinishedPulling="2026-03-08 03:24:34.442593687 +0000 UTC m=+810.837069368" observedRunningTime="2026-03-08 03:24:34.908590051 +0000 UTC m=+811.303065732" watchObservedRunningTime="2026-03-08 03:24:34.909355801 +0000 UTC m=+811.303831482" Mar 08 03:24:34.915289 master-0 kubenswrapper[7387]: I0308 03:24:34.915233 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7"] Mar 08 03:24:34.950259 master-0 kubenswrapper[7387]: I0308 03:24:34.950189 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:24:34.953787 master-0 kubenswrapper[7387]: I0308 03:24:34.953737 7387 scope.go:117] "RemoveContainer" containerID="41ca15e4a6ad4847ca08ee2bbd0a8d8131abfdfd04eefcf604b4ac8e41fe27f8" Mar 08 03:24:34.961231 master-0 kubenswrapper[7387]: E0308 03:24:34.961175 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41ca15e4a6ad4847ca08ee2bbd0a8d8131abfdfd04eefcf604b4ac8e41fe27f8\": container with ID starting with 41ca15e4a6ad4847ca08ee2bbd0a8d8131abfdfd04eefcf604b4ac8e41fe27f8 not found: ID does not exist" containerID="41ca15e4a6ad4847ca08ee2bbd0a8d8131abfdfd04eefcf604b4ac8e41fe27f8" Mar 08 03:24:34.961296 master-0 kubenswrapper[7387]: I0308 03:24:34.961238 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41ca15e4a6ad4847ca08ee2bbd0a8d8131abfdfd04eefcf604b4ac8e41fe27f8"} err="failed to get container status \"41ca15e4a6ad4847ca08ee2bbd0a8d8131abfdfd04eefcf604b4ac8e41fe27f8\": rpc error: code = NotFound desc = could not find container \"41ca15e4a6ad4847ca08ee2bbd0a8d8131abfdfd04eefcf604b4ac8e41fe27f8\": container with ID starting with 41ca15e4a6ad4847ca08ee2bbd0a8d8131abfdfd04eefcf604b4ac8e41fe27f8 not found: ID does not exist" Mar 08 03:24:34.961296 master-0 kubenswrapper[7387]: I0308 03:24:34.961271 7387 scope.go:117] "RemoveContainer" containerID="101ea5b74925f2629d3b673abc3df2646b2971ed5a02a4d33f72d8e0bafc02dc" Mar 08 03:24:34.962230 master-0 kubenswrapper[7387]: E0308 03:24:34.962188 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"101ea5b74925f2629d3b673abc3df2646b2971ed5a02a4d33f72d8e0bafc02dc\": container with ID starting with 101ea5b74925f2629d3b673abc3df2646b2971ed5a02a4d33f72d8e0bafc02dc not found: ID does not exist" 
containerID="101ea5b74925f2629d3b673abc3df2646b2971ed5a02a4d33f72d8e0bafc02dc" Mar 08 03:24:34.962295 master-0 kubenswrapper[7387]: I0308 03:24:34.962228 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"101ea5b74925f2629d3b673abc3df2646b2971ed5a02a4d33f72d8e0bafc02dc"} err="failed to get container status \"101ea5b74925f2629d3b673abc3df2646b2971ed5a02a4d33f72d8e0bafc02dc\": rpc error: code = NotFound desc = could not find container \"101ea5b74925f2629d3b673abc3df2646b2971ed5a02a4d33f72d8e0bafc02dc\": container with ID starting with 101ea5b74925f2629d3b673abc3df2646b2971ed5a02a4d33f72d8e0bafc02dc not found: ID does not exist" Mar 08 03:24:34.967551 master-0 kubenswrapper[7387]: I0308 03:24:34.967517 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" Mar 08 03:24:34.995556 master-0 kubenswrapper[7387]: I0308 03:24:34.995504 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" Mar 08 03:24:35.038101 master-0 kubenswrapper[7387]: I0308 03:24:35.037105 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv4f8\" (UniqueName: \"kubernetes.io/projected/e2495994-736c-4916-b210-ff5633f3387d-kube-api-access-qv4f8\") pod \"e2495994-736c-4916-b210-ff5633f3387d\" (UID: \"e2495994-736c-4916-b210-ff5633f3387d\") " Mar 08 03:24:35.038101 master-0 kubenswrapper[7387]: I0308 03:24:35.037793 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2495994-736c-4916-b210-ff5633f3387d-config" (OuterVolumeSpecName: "config") pod "e2495994-736c-4916-b210-ff5633f3387d" (UID: "e2495994-736c-4916-b210-ff5633f3387d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:24:35.038526 master-0 kubenswrapper[7387]: I0308 03:24:35.038346 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2495994-736c-4916-b210-ff5633f3387d-config\") pod \"e2495994-736c-4916-b210-ff5633f3387d\" (UID: \"e2495994-736c-4916-b210-ff5633f3387d\") " Mar 08 03:24:35.039701 master-0 kubenswrapper[7387]: I0308 03:24:35.039237 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2495994-736c-4916-b210-ff5633f3387d-serving-cert\") pod \"e2495994-736c-4916-b210-ff5633f3387d\" (UID: \"e2495994-736c-4916-b210-ff5633f3387d\") " Mar 08 03:24:35.039701 master-0 kubenswrapper[7387]: I0308 03:24:35.039350 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2495994-736c-4916-b210-ff5633f3387d-client-ca\") pod \"e2495994-736c-4916-b210-ff5633f3387d\" (UID: \"e2495994-736c-4916-b210-ff5633f3387d\") " Mar 08 03:24:35.040626 master-0 kubenswrapper[7387]: I0308 03:24:35.040312 7387 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2495994-736c-4916-b210-ff5633f3387d-config\") on node \"master-0\" DevicePath \"\"" Mar 08 03:24:35.041419 master-0 kubenswrapper[7387]: I0308 03:24:35.041342 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2495994-736c-4916-b210-ff5633f3387d-client-ca" (OuterVolumeSpecName: "client-ca") pod "e2495994-736c-4916-b210-ff5633f3387d" (UID: "e2495994-736c-4916-b210-ff5633f3387d"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:24:35.044176 master-0 kubenswrapper[7387]: I0308 03:24:35.044134 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2495994-736c-4916-b210-ff5633f3387d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e2495994-736c-4916-b210-ff5633f3387d" (UID: "e2495994-736c-4916-b210-ff5633f3387d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:24:35.051111 master-0 kubenswrapper[7387]: I0308 03:24:35.051066 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2495994-736c-4916-b210-ff5633f3387d-kube-api-access-qv4f8" (OuterVolumeSpecName: "kube-api-access-qv4f8") pod "e2495994-736c-4916-b210-ff5633f3387d" (UID: "e2495994-736c-4916-b210-ff5633f3387d"). InnerVolumeSpecName "kube-api-access-qv4f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:24:35.145846 master-0 kubenswrapper[7387]: I0308 03:24:35.145730 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv4f8\" (UniqueName: \"kubernetes.io/projected/e2495994-736c-4916-b210-ff5633f3387d-kube-api-access-qv4f8\") on node \"master-0\" DevicePath \"\"" Mar 08 03:24:35.145846 master-0 kubenswrapper[7387]: I0308 03:24:35.145763 7387 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2495994-736c-4916-b210-ff5633f3387d-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 03:24:35.145846 master-0 kubenswrapper[7387]: I0308 03:24:35.145773 7387 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e2495994-736c-4916-b210-ff5633f3387d-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 03:24:35.375462 master-0 kubenswrapper[7387]: I0308 03:24:35.375411 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-75cd54f7f-2bg6l"] Mar 08 03:24:35.382684 master-0 kubenswrapper[7387]: W0308 03:24:35.382631 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd53c98b_51cc_498a_ab37_f743a27bdcfb.slice/crio-846f36ee6a71e885eba4255e43db9daaf610d513f1e85ae2a0f46bf5cfb8b1a1 WatchSource:0}: Error finding container 846f36ee6a71e885eba4255e43db9daaf610d513f1e85ae2a0f46bf5cfb8b1a1: Status 404 returned error can't find the container with id 846f36ee6a71e885eba4255e43db9daaf610d513f1e85ae2a0f46bf5cfb8b1a1 Mar 08 03:24:35.603198 master-0 kubenswrapper[7387]: I0308 03:24:35.603125 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:35.603198 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:35.603198 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:35.603198 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:35.603198 master-0 kubenswrapper[7387]: I0308 03:24:35.603190 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:35.774295 master-0 kubenswrapper[7387]: I0308 03:24:35.774258 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd1c09ba-b44c-446a-abe0-53ac3e910a77" path="/var/lib/kubelet/pods/dd1c09ba-b44c-446a-abe0-53ac3e910a77/volumes" Mar 08 03:24:35.844926 master-0 kubenswrapper[7387]: I0308 03:24:35.843413 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" 
event={"ID":"bd53c98b-51cc-498a-ab37-f743a27bdcfb","Type":"ContainerStarted","Data":"52064f3ac14760c9cb11d88b1c7b45aa27ea0169c06575ac706e2935eceff0c6"} Mar 08 03:24:35.844926 master-0 kubenswrapper[7387]: I0308 03:24:35.843479 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" event={"ID":"bd53c98b-51cc-498a-ab37-f743a27bdcfb","Type":"ContainerStarted","Data":"846f36ee6a71e885eba4255e43db9daaf610d513f1e85ae2a0f46bf5cfb8b1a1"} Mar 08 03:24:35.844926 master-0 kubenswrapper[7387]: I0308 03:24:35.843730 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:24:35.848727 master-0 kubenswrapper[7387]: I0308 03:24:35.848697 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" event={"ID":"645d8c66-50e1-4e0e-ae02-5a766526652e","Type":"ContainerStarted","Data":"15f50c071f201867c43246dac6f792f0b50269dfa6fccfc5c002806ce76a47a5"} Mar 08 03:24:35.848817 master-0 kubenswrapper[7387]: I0308 03:24:35.848803 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" event={"ID":"645d8c66-50e1-4e0e-ae02-5a766526652e","Type":"ContainerStarted","Data":"12c51f44e28e5558cd4bdffa4e53ad4825db01b2ba98d6f7f708ff6d84be0671"} Mar 08 03:24:35.849497 master-0 kubenswrapper[7387]: I0308 03:24:35.849475 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" Mar 08 03:24:35.851088 master-0 kubenswrapper[7387]: I0308 03:24:35.851072 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" event={"ID":"e2495994-736c-4916-b210-ff5633f3387d","Type":"ContainerDied","Data":"0f031beb71b55f3d5cf502aa52b29fda44b26c543b17c2ed8446cc613eb9a37c"} Mar 08 03:24:35.851174 master-0 kubenswrapper[7387]: 
I0308 03:24:35.851162 7387 scope.go:117] "RemoveContainer" containerID="ba06595e6a5f3ba16e78e9f249cd73ba267f2f907f5c29c1de1760f3a56ccdd7" Mar 08 03:24:35.852633 master-0 kubenswrapper[7387]: I0308 03:24:35.851321 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj" Mar 08 03:24:35.853011 master-0 kubenswrapper[7387]: I0308 03:24:35.852927 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:24:35.870413 master-0 kubenswrapper[7387]: I0308 03:24:35.869686 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" event={"ID":"8c65557b-9566-49f1-a049-fe492ca201b5","Type":"ContainerStarted","Data":"78dfc0f3409bfb2a50f2c5b38cd840831d1fcd048f9d1385cd7bd8ea527a7889"} Mar 08 03:24:35.870413 master-0 kubenswrapper[7387]: I0308 03:24:35.869767 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" event={"ID":"8c65557b-9566-49f1-a049-fe492ca201b5","Type":"ContainerStarted","Data":"1a7085411bd9650b06b777535c32a51b5f0829889be0498544a2a5320ab65b31"} Mar 08 03:24:35.873273 master-0 kubenswrapper[7387]: I0308 03:24:35.873213 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" podStartSLOduration=2.87319212 podStartE2EDuration="2.87319212s" podCreationTimestamp="2026-03-08 03:24:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:24:35.868545379 +0000 UTC m=+812.263021060" watchObservedRunningTime="2026-03-08 03:24:35.87319212 +0000 UTC m=+812.267667811" Mar 08 03:24:35.944716 master-0 kubenswrapper[7387]: I0308 03:24:35.944627 7387 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" podStartSLOduration=1.944609024 podStartE2EDuration="1.944609024s" podCreationTimestamp="2026-03-08 03:24:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:24:35.940302702 +0000 UTC m=+812.334778383" watchObservedRunningTime="2026-03-08 03:24:35.944609024 +0000 UTC m=+812.339084705" Mar 08 03:24:35.963986 master-0 kubenswrapper[7387]: I0308 03:24:35.959787 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj"] Mar 08 03:24:35.963986 master-0 kubenswrapper[7387]: I0308 03:24:35.963540 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8c4996cd4-qsvqj"] Mar 08 03:24:36.598726 master-0 kubenswrapper[7387]: I0308 03:24:36.598652 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:36.598726 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:36.598726 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:36.598726 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:36.599190 master-0 kubenswrapper[7387]: I0308 03:24:36.598731 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:36.920602 master-0 kubenswrapper[7387]: I0308 03:24:36.920506 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" Mar 08 03:24:37.241922 master-0 kubenswrapper[7387]: I0308 03:24:37.241783 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh"] Mar 08 03:24:37.242171 master-0 kubenswrapper[7387]: E0308 03:24:37.242087 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" Mar 08 03:24:37.242171 master-0 kubenswrapper[7387]: I0308 03:24:37.242103 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" Mar 08 03:24:37.242171 master-0 kubenswrapper[7387]: E0308 03:24:37.242134 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" Mar 08 03:24:37.242171 master-0 kubenswrapper[7387]: I0308 03:24:37.242139 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" Mar 08 03:24:37.242332 master-0 kubenswrapper[7387]: I0308 03:24:37.242240 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" Mar 08 03:24:37.242332 master-0 kubenswrapper[7387]: I0308 03:24:37.242257 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" Mar 08 03:24:37.242332 master-0 kubenswrapper[7387]: I0308 03:24:37.242271 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" Mar 08 03:24:37.242678 master-0 kubenswrapper[7387]: I0308 03:24:37.242647 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:24:37.248341 master-0 kubenswrapper[7387]: I0308 03:24:37.248293 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 08 03:24:37.248488 master-0 kubenswrapper[7387]: I0308 03:24:37.248301 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 08 03:24:37.248550 master-0 kubenswrapper[7387]: I0308 03:24:37.248514 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-fvhvd" Mar 08 03:24:37.248686 master-0 kubenswrapper[7387]: I0308 03:24:37.248658 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 08 03:24:37.248893 master-0 kubenswrapper[7387]: I0308 03:24:37.248852 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 08 03:24:37.249724 master-0 kubenswrapper[7387]: I0308 03:24:37.249684 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 08 03:24:37.252685 master-0 kubenswrapper[7387]: I0308 03:24:37.251815 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh"] Mar 08 03:24:37.289805 master-0 kubenswrapper[7387]: I0308 03:24:37.289742 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8krg\" (UniqueName: \"kubernetes.io/projected/a0ee8c53-bf36-4459-a2c2-380293a09e26-kube-api-access-c8krg\") pod \"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " 
pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:24:37.289805 master-0 kubenswrapper[7387]: I0308 03:24:37.289801 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-client-ca\") pod \"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:24:37.290072 master-0 kubenswrapper[7387]: I0308 03:24:37.289853 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-config\") pod \"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:24:37.290072 master-0 kubenswrapper[7387]: I0308 03:24:37.289884 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ee8c53-bf36-4459-a2c2-380293a09e26-serving-cert\") pod \"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:24:37.391696 master-0 kubenswrapper[7387]: I0308 03:24:37.391631 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-config\") pod \"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:24:37.391696 master-0 kubenswrapper[7387]: I0308 03:24:37.391686 7387 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ee8c53-bf36-4459-a2c2-380293a09e26-serving-cert\") pod \"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:24:37.391924 master-0 kubenswrapper[7387]: I0308 03:24:37.391746 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8krg\" (UniqueName: \"kubernetes.io/projected/a0ee8c53-bf36-4459-a2c2-380293a09e26-kube-api-access-c8krg\") pod \"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:24:37.391924 master-0 kubenswrapper[7387]: I0308 03:24:37.391769 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-client-ca\") pod \"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:24:37.394089 master-0 kubenswrapper[7387]: I0308 03:24:37.392585 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-client-ca\") pod \"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:24:37.394089 master-0 kubenswrapper[7387]: I0308 03:24:37.393150 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-config\") pod 
\"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:24:37.404964 master-0 kubenswrapper[7387]: I0308 03:24:37.404884 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ee8c53-bf36-4459-a2c2-380293a09e26-serving-cert\") pod \"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:24:37.407479 master-0 kubenswrapper[7387]: I0308 03:24:37.407453 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8krg\" (UniqueName: \"kubernetes.io/projected/a0ee8c53-bf36-4459-a2c2-380293a09e26-kube-api-access-c8krg\") pod \"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:24:37.575727 master-0 kubenswrapper[7387]: I0308 03:24:37.575592 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:24:37.599830 master-0 kubenswrapper[7387]: I0308 03:24:37.599775 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:37.599830 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:37.599830 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:37.599830 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:37.600041 master-0 kubenswrapper[7387]: I0308 03:24:37.599874 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:37.677287 master-0 kubenswrapper[7387]: I0308 03:24:37.677208 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9cmmj"] Mar 08 03:24:37.767229 master-0 kubenswrapper[7387]: I0308 03:24:37.767185 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2495994-736c-4916-b210-ff5633f3387d" path="/var/lib/kubelet/pods/e2495994-736c-4916-b210-ff5633f3387d/volumes" Mar 08 03:24:38.029064 master-0 kubenswrapper[7387]: I0308 03:24:38.029007 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh"] Mar 08 03:24:38.039190 master-0 kubenswrapper[7387]: W0308 03:24:38.039100 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0ee8c53_bf36_4459_a2c2_380293a09e26.slice/crio-7a6ea17a030d90670e0e331f269af06bb55ade280ec6f510768c353e818db740 WatchSource:0}: 
Error finding container 7a6ea17a030d90670e0e331f269af06bb55ade280ec6f510768c353e818db740: Status 404 returned error can't find the container with id 7a6ea17a030d90670e0e331f269af06bb55ade280ec6f510768c353e818db740 Mar 08 03:24:38.599065 master-0 kubenswrapper[7387]: I0308 03:24:38.599021 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:38.599065 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:38.599065 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:38.599065 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:38.599392 master-0 kubenswrapper[7387]: I0308 03:24:38.599081 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:38.914533 master-0 kubenswrapper[7387]: I0308 03:24:38.914395 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" podUID="645d8c66-50e1-4e0e-ae02-5a766526652e" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://15f50c071f201867c43246dac6f792f0b50269dfa6fccfc5c002806ce76a47a5" gracePeriod=30 Mar 08 03:24:38.915409 master-0 kubenswrapper[7387]: I0308 03:24:38.915105 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" event={"ID":"a0ee8c53-bf36-4459-a2c2-380293a09e26","Type":"ContainerStarted","Data":"a37cd76e25a0f8104dadf4dc40b6fbbd6e89423031b1f10fd470d329da3c1ab7"} Mar 08 03:24:38.915409 master-0 kubenswrapper[7387]: I0308 03:24:38.915173 7387 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" event={"ID":"a0ee8c53-bf36-4459-a2c2-380293a09e26","Type":"ContainerStarted","Data":"7a6ea17a030d90670e0e331f269af06bb55ade280ec6f510768c353e818db740"} Mar 08 03:24:38.916614 master-0 kubenswrapper[7387]: I0308 03:24:38.915655 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:24:38.923174 master-0 kubenswrapper[7387]: I0308 03:24:38.923122 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:24:38.948935 master-0 kubenswrapper[7387]: I0308 03:24:38.946613 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" podStartSLOduration=5.946589583 podStartE2EDuration="5.946589583s" podCreationTimestamp="2026-03-08 03:24:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:24:38.941593843 +0000 UTC m=+815.336069554" watchObservedRunningTime="2026-03-08 03:24:38.946589583 +0000 UTC m=+815.341065264" Mar 08 03:24:39.598661 master-0 kubenswrapper[7387]: I0308 03:24:39.598615 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:39.598661 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:39.598661 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:39.598661 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:39.599194 master-0 kubenswrapper[7387]: I0308 03:24:39.598664 7387 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:40.598647 master-0 kubenswrapper[7387]: I0308 03:24:40.598595 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:40.598647 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:40.598647 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:40.598647 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:40.599285 master-0 kubenswrapper[7387]: I0308 03:24:40.598672 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:41.600089 master-0 kubenswrapper[7387]: I0308 03:24:41.600044 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:41.600089 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:41.600089 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:41.600089 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:41.600650 master-0 kubenswrapper[7387]: I0308 03:24:41.600112 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
08 03:24:42.599607 master-0 kubenswrapper[7387]: I0308 03:24:42.599532 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:42.599607 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:42.599607 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:42.599607 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:42.600055 master-0 kubenswrapper[7387]: I0308 03:24:42.599644 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:42.952780 master-0 kubenswrapper[7387]: I0308 03:24:42.952588 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" event={"ID":"8c65557b-9566-49f1-a049-fe492ca201b5","Type":"ContainerStarted","Data":"a06749d70fe898a009e67138a8c24210d9e9c5e2f8da6592f0e5a82371873c57"} Mar 08 03:24:42.987721 master-0 kubenswrapper[7387]: I0308 03:24:42.987582 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" podStartSLOduration=36.847679095 podStartE2EDuration="43.987551553s" podCreationTimestamp="2026-03-08 03:23:59 +0000 UTC" firstStartedPulling="2026-03-08 03:24:35.081346441 +0000 UTC m=+811.475822132" lastFinishedPulling="2026-03-08 03:24:42.221218879 +0000 UTC m=+818.615694590" observedRunningTime="2026-03-08 03:24:42.987039559 +0000 UTC m=+819.381515310" watchObservedRunningTime="2026-03-08 03:24:42.987551553 +0000 UTC m=+819.382027274" Mar 08 03:24:43.512135 master-0 kubenswrapper[7387]: I0308 03:24:43.512078 7387 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 08 03:24:43.512349 master-0 kubenswrapper[7387]: E0308 03:24:43.512329 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" Mar 08 03:24:43.512349 master-0 kubenswrapper[7387]: I0308 03:24:43.512346 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2495994-736c-4916-b210-ff5633f3387d" containerName="route-controller-manager" Mar 08 03:24:43.513856 master-0 kubenswrapper[7387]: I0308 03:24:43.513825 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 03:24:43.516155 master-0 kubenswrapper[7387]: I0308 03:24:43.516101 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 08 03:24:43.520330 master-0 kubenswrapper[7387]: I0308 03:24:43.520282 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-mggqh" Mar 08 03:24:43.532004 master-0 kubenswrapper[7387]: I0308 03:24:43.531937 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 08 03:24:43.590802 master-0 kubenswrapper[7387]: I0308 03:24:43.590743 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddf7d93b-6a73-4de5-b984-cde6fba07b48-kube-api-access\") pod \"installer-4-master-0\" (UID: \"ddf7d93b-6a73-4de5-b984-cde6fba07b48\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 03:24:43.591000 master-0 kubenswrapper[7387]: I0308 03:24:43.590826 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ddf7d93b-6a73-4de5-b984-cde6fba07b48-var-lock\") pod 
\"installer-4-master-0\" (UID: \"ddf7d93b-6a73-4de5-b984-cde6fba07b48\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 03:24:43.591000 master-0 kubenswrapper[7387]: I0308 03:24:43.590882 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ddf7d93b-6a73-4de5-b984-cde6fba07b48-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"ddf7d93b-6a73-4de5-b984-cde6fba07b48\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 03:24:43.602633 master-0 kubenswrapper[7387]: I0308 03:24:43.602589 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:43.602633 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:43.602633 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:43.602633 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:43.602770 master-0 kubenswrapper[7387]: I0308 03:24:43.602665 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:43.692032 master-0 kubenswrapper[7387]: I0308 03:24:43.691979 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddf7d93b-6a73-4de5-b984-cde6fba07b48-kube-api-access\") pod \"installer-4-master-0\" (UID: \"ddf7d93b-6a73-4de5-b984-cde6fba07b48\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 03:24:43.692239 master-0 kubenswrapper[7387]: I0308 03:24:43.692181 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lock\" (UniqueName: \"kubernetes.io/host-path/ddf7d93b-6a73-4de5-b984-cde6fba07b48-var-lock\") pod \"installer-4-master-0\" (UID: \"ddf7d93b-6a73-4de5-b984-cde6fba07b48\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 03:24:43.692344 master-0 kubenswrapper[7387]: I0308 03:24:43.692294 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ddf7d93b-6a73-4de5-b984-cde6fba07b48-var-lock\") pod \"installer-4-master-0\" (UID: \"ddf7d93b-6a73-4de5-b984-cde6fba07b48\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 03:24:43.692406 master-0 kubenswrapper[7387]: I0308 03:24:43.692350 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ddf7d93b-6a73-4de5-b984-cde6fba07b48-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"ddf7d93b-6a73-4de5-b984-cde6fba07b48\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 03:24:43.692406 master-0 kubenswrapper[7387]: I0308 03:24:43.692326 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ddf7d93b-6a73-4de5-b984-cde6fba07b48-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"ddf7d93b-6a73-4de5-b984-cde6fba07b48\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 03:24:43.710321 master-0 kubenswrapper[7387]: I0308 03:24:43.710270 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddf7d93b-6a73-4de5-b984-cde6fba07b48-kube-api-access\") pod \"installer-4-master-0\" (UID: \"ddf7d93b-6a73-4de5-b984-cde6fba07b48\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 03:24:43.900761 master-0 kubenswrapper[7387]: I0308 03:24:43.900600 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 08 03:24:44.603858 master-0 kubenswrapper[7387]: I0308 03:24:44.603785 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:44.603858 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:44.603858 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:44.603858 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:44.604631 master-0 kubenswrapper[7387]: I0308 03:24:44.603883 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:44.617152 master-0 kubenswrapper[7387]: W0308 03:24:44.617083 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podddf7d93b_6a73_4de5_b984_cde6fba07b48.slice/crio-32a87f978dcf5066fede63e02fc606a7202218ed7b98595c93603193fba400bb WatchSource:0}: Error finding container 32a87f978dcf5066fede63e02fc606a7202218ed7b98595c93603193fba400bb: Status 404 returned error can't find the container with id 32a87f978dcf5066fede63e02fc606a7202218ed7b98595c93603193fba400bb Mar 08 03:24:44.622785 master-0 kubenswrapper[7387]: I0308 03:24:44.622742 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-lxr7s"] Mar 08 03:24:44.624521 master-0 kubenswrapper[7387]: I0308 03:24:44.624488 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-7769569c45-lxr7s" Mar 08 03:24:44.626801 master-0 kubenswrapper[7387]: I0308 03:24:44.626752 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-pz6cl" Mar 08 03:24:44.631720 master-0 kubenswrapper[7387]: I0308 03:24:44.629559 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 08 03:24:44.710684 master-0 kubenswrapper[7387]: I0308 03:24:44.710619 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t29sr\" (UniqueName: \"kubernetes.io/projected/daf9e0ac-b5a3-4a3e-aa57-31b810f634ef-kube-api-access-t29sr\") pod \"multus-admission-controller-7769569c45-lxr7s\" (UID: \"daf9e0ac-b5a3-4a3e-aa57-31b810f634ef\") " pod="openshift-multus/multus-admission-controller-7769569c45-lxr7s" Mar 08 03:24:44.710853 master-0 kubenswrapper[7387]: I0308 03:24:44.710722 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/daf9e0ac-b5a3-4a3e-aa57-31b810f634ef-webhook-certs\") pod \"multus-admission-controller-7769569c45-lxr7s\" (UID: \"daf9e0ac-b5a3-4a3e-aa57-31b810f634ef\") " pod="openshift-multus/multus-admission-controller-7769569c45-lxr7s" Mar 08 03:24:44.812078 master-0 kubenswrapper[7387]: I0308 03:24:44.812006 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t29sr\" (UniqueName: \"kubernetes.io/projected/daf9e0ac-b5a3-4a3e-aa57-31b810f634ef-kube-api-access-t29sr\") pod \"multus-admission-controller-7769569c45-lxr7s\" (UID: \"daf9e0ac-b5a3-4a3e-aa57-31b810f634ef\") " pod="openshift-multus/multus-admission-controller-7769569c45-lxr7s" Mar 08 03:24:44.812286 master-0 kubenswrapper[7387]: I0308 03:24:44.812257 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/daf9e0ac-b5a3-4a3e-aa57-31b810f634ef-webhook-certs\") pod \"multus-admission-controller-7769569c45-lxr7s\" (UID: \"daf9e0ac-b5a3-4a3e-aa57-31b810f634ef\") " pod="openshift-multus/multus-admission-controller-7769569c45-lxr7s" Mar 08 03:24:44.815711 master-0 kubenswrapper[7387]: I0308 03:24:44.815650 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/daf9e0ac-b5a3-4a3e-aa57-31b810f634ef-webhook-certs\") pod \"multus-admission-controller-7769569c45-lxr7s\" (UID: \"daf9e0ac-b5a3-4a3e-aa57-31b810f634ef\") " pod="openshift-multus/multus-admission-controller-7769569c45-lxr7s" Mar 08 03:24:44.969060 master-0 kubenswrapper[7387]: I0308 03:24:44.968966 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"ddf7d93b-6a73-4de5-b984-cde6fba07b48","Type":"ContainerStarted","Data":"32a87f978dcf5066fede63e02fc606a7202218ed7b98595c93603193fba400bb"} Mar 08 03:24:44.998442 master-0 kubenswrapper[7387]: E0308 03:24:44.998340 7387 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="15f50c071f201867c43246dac6f792f0b50269dfa6fccfc5c002806ce76a47a5" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 03:24:44.999996 master-0 kubenswrapper[7387]: E0308 03:24:44.999873 7387 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="15f50c071f201867c43246dac6f792f0b50269dfa6fccfc5c002806ce76a47a5" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 03:24:45.001731 master-0 kubenswrapper[7387]: E0308 03:24:45.001621 7387 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = 
Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="15f50c071f201867c43246dac6f792f0b50269dfa6fccfc5c002806ce76a47a5" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 03:24:45.001731 master-0 kubenswrapper[7387]: E0308 03:24:45.001719 7387 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" podUID="645d8c66-50e1-4e0e-ae02-5a766526652e" containerName="kube-multus-additional-cni-plugins" Mar 08 03:24:45.066959 master-0 kubenswrapper[7387]: I0308 03:24:45.066841 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-lxr7s"] Mar 08 03:24:45.082128 master-0 kubenswrapper[7387]: I0308 03:24:45.082057 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t29sr\" (UniqueName: \"kubernetes.io/projected/daf9e0ac-b5a3-4a3e-aa57-31b810f634ef-kube-api-access-t29sr\") pod \"multus-admission-controller-7769569c45-lxr7s\" (UID: \"daf9e0ac-b5a3-4a3e-aa57-31b810f634ef\") " pod="openshift-multus/multus-admission-controller-7769569c45-lxr7s" Mar 08 03:24:45.303124 master-0 kubenswrapper[7387]: I0308 03:24:45.303038 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-7769569c45-lxr7s" Mar 08 03:24:45.599096 master-0 kubenswrapper[7387]: I0308 03:24:45.598890 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:45.599096 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:45.599096 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:45.599096 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:45.599096 master-0 kubenswrapper[7387]: I0308 03:24:45.598985 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:45.979364 master-0 kubenswrapper[7387]: I0308 03:24:45.979196 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"ddf7d93b-6a73-4de5-b984-cde6fba07b48","Type":"ContainerStarted","Data":"48906d4a9827177a4feca5f421bb263deddb2a2e07e0343746350be07efd8684"} Mar 08 03:24:46.191108 master-0 kubenswrapper[7387]: I0308 03:24:46.189574 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-lxr7s"] Mar 08 03:24:46.194514 master-0 kubenswrapper[7387]: I0308 03:24:46.193247 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=3.193228579 podStartE2EDuration="3.193228579s" podCreationTimestamp="2026-03-08 03:24:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:24:46.154373784 +0000 
UTC m=+822.548849545" watchObservedRunningTime="2026-03-08 03:24:46.193228579 +0000 UTC m=+822.587704270" Mar 08 03:24:46.407134 master-0 kubenswrapper[7387]: I0308 03:24:46.407063 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 08 03:24:46.408197 master-0 kubenswrapper[7387]: I0308 03:24:46.408158 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 08 03:24:46.410137 master-0 kubenswrapper[7387]: I0308 03:24:46.410097 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-sglg6" Mar 08 03:24:46.411515 master-0 kubenswrapper[7387]: I0308 03:24:46.411472 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 08 03:24:46.470683 master-0 kubenswrapper[7387]: I0308 03:24:46.470606 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf2c720c-7700-4cdb-b9e9-9341479046d6-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"bf2c720c-7700-4cdb-b9e9-9341479046d6\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 08 03:24:46.470950 master-0 kubenswrapper[7387]: I0308 03:24:46.470689 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf2c720c-7700-4cdb-b9e9-9341479046d6-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"bf2c720c-7700-4cdb-b9e9-9341479046d6\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 08 03:24:46.470950 master-0 kubenswrapper[7387]: I0308 03:24:46.470749 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/bf2c720c-7700-4cdb-b9e9-9341479046d6-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"bf2c720c-7700-4cdb-b9e9-9341479046d6\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 08 03:24:46.477164 master-0 kubenswrapper[7387]: I0308 03:24:46.477092 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 08 03:24:46.571811 master-0 kubenswrapper[7387]: I0308 03:24:46.571756 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf2c720c-7700-4cdb-b9e9-9341479046d6-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"bf2c720c-7700-4cdb-b9e9-9341479046d6\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 08 03:24:46.571811 master-0 kubenswrapper[7387]: I0308 03:24:46.571809 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf2c720c-7700-4cdb-b9e9-9341479046d6-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"bf2c720c-7700-4cdb-b9e9-9341479046d6\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 08 03:24:46.572086 master-0 kubenswrapper[7387]: I0308 03:24:46.571846 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf2c720c-7700-4cdb-b9e9-9341479046d6-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"bf2c720c-7700-4cdb-b9e9-9341479046d6\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 08 03:24:46.572086 master-0 kubenswrapper[7387]: I0308 03:24:46.571940 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf2c720c-7700-4cdb-b9e9-9341479046d6-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"bf2c720c-7700-4cdb-b9e9-9341479046d6\") " 
pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 08 03:24:46.572418 master-0 kubenswrapper[7387]: I0308 03:24:46.572387 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf2c720c-7700-4cdb-b9e9-9341479046d6-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"bf2c720c-7700-4cdb-b9e9-9341479046d6\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 08 03:24:46.598935 master-0 kubenswrapper[7387]: I0308 03:24:46.598875 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf2c720c-7700-4cdb-b9e9-9341479046d6-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"bf2c720c-7700-4cdb-b9e9-9341479046d6\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 08 03:24:46.601115 master-0 kubenswrapper[7387]: I0308 03:24:46.601080 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:46.601115 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:46.601115 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:46.601115 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:46.601303 master-0 kubenswrapper[7387]: I0308 03:24:46.601134 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:46.781027 master-0 kubenswrapper[7387]: I0308 03:24:46.780961 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 08 03:24:46.990988 master-0 kubenswrapper[7387]: I0308 03:24:46.990443 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-lxr7s" event={"ID":"daf9e0ac-b5a3-4a3e-aa57-31b810f634ef","Type":"ContainerStarted","Data":"540bc0c74500cbcc8195262c8e4dfa792c85d3c21e495089d649b6a7599066b8"} Mar 08 03:24:46.990988 master-0 kubenswrapper[7387]: I0308 03:24:46.990513 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-lxr7s" event={"ID":"daf9e0ac-b5a3-4a3e-aa57-31b810f634ef","Type":"ContainerStarted","Data":"37f80f130113372976c8aa5e5cf51cc51fb1ba390ad43731ea3dd2338228d9c4"} Mar 08 03:24:46.990988 master-0 kubenswrapper[7387]: I0308 03:24:46.990529 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-lxr7s" event={"ID":"daf9e0ac-b5a3-4a3e-aa57-31b810f634ef","Type":"ContainerStarted","Data":"8763acbe8455fad4530b6a292ec3d641368771a0e2662a77415028cd12a34859"} Mar 08 03:24:47.007882 master-0 kubenswrapper[7387]: I0308 03:24:47.007417 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-7769569c45-lxr7s" podStartSLOduration=3.007399251 podStartE2EDuration="3.007399251s" podCreationTimestamp="2026-03-08 03:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:24:47.002953655 +0000 UTC m=+823.397429346" watchObservedRunningTime="2026-03-08 03:24:47.007399251 +0000 UTC m=+823.401874932" Mar 08 03:24:47.046057 master-0 kubenswrapper[7387]: I0308 03:24:47.045031 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-xhkzl"] Mar 08 03:24:47.046057 master-0 kubenswrapper[7387]: I0308 
03:24:47.045404 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl" podUID="d5f84bd4-2803-41ff-a1d1-a549991fe895" containerName="kube-rbac-proxy" containerID="cri-o://4c27d8bf0fe82333d5a0263568559ac58eb59de0b0e67b1c1334b664b1330158" gracePeriod=30 Mar 08 03:24:47.046057 master-0 kubenswrapper[7387]: I0308 03:24:47.045677 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl" podUID="d5f84bd4-2803-41ff-a1d1-a549991fe895" containerName="multus-admission-controller" containerID="cri-o://d8908e02467ded566e9d23379f605a2e44df49bd48cf230c5b0b05ea8c4f6b21" gracePeriod=30 Mar 08 03:24:47.203590 master-0 kubenswrapper[7387]: I0308 03:24:47.198115 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 08 03:24:47.599455 master-0 kubenswrapper[7387]: I0308 03:24:47.599344 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:47.599455 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:47.599455 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:47.599455 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:47.599455 master-0 kubenswrapper[7387]: I0308 03:24:47.599421 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:48.000307 master-0 kubenswrapper[7387]: I0308 03:24:48.000171 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"bf2c720c-7700-4cdb-b9e9-9341479046d6","Type":"ContainerStarted","Data":"4d2476856e29b23f71b3efa17f6e2d132475b3255bd78a4c4c27299dcf99f68f"} Mar 08 03:24:48.000885 master-0 kubenswrapper[7387]: I0308 03:24:48.000356 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"bf2c720c-7700-4cdb-b9e9-9341479046d6","Type":"ContainerStarted","Data":"768f1952eb5c9c4e206fc6f42ed6d3c451f1ab498187eab3b5dd94dd0db3d647"} Mar 08 03:24:48.003696 master-0 kubenswrapper[7387]: I0308 03:24:48.003613 7387 generic.go:334] "Generic (PLEG): container finished" podID="d5f84bd4-2803-41ff-a1d1-a549991fe895" containerID="4c27d8bf0fe82333d5a0263568559ac58eb59de0b0e67b1c1334b664b1330158" exitCode=0 Mar 08 03:24:48.003790 master-0 kubenswrapper[7387]: I0308 03:24:48.003663 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl" event={"ID":"d5f84bd4-2803-41ff-a1d1-a549991fe895","Type":"ContainerDied","Data":"4c27d8bf0fe82333d5a0263568559ac58eb59de0b0e67b1c1334b664b1330158"} Mar 08 03:24:48.026826 master-0 kubenswrapper[7387]: I0308 03:24:48.026713 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podStartSLOduration=2.026681727 podStartE2EDuration="2.026681727s" podCreationTimestamp="2026-03-08 03:24:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:24:48.018833192 +0000 UTC m=+824.413308943" watchObservedRunningTime="2026-03-08 03:24:48.026681727 +0000 UTC m=+824.421157438" Mar 08 03:24:48.599780 master-0 kubenswrapper[7387]: I0308 03:24:48.599690 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:48.599780 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:48.599780 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:48.599780 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:48.599780 master-0 kubenswrapper[7387]: I0308 03:24:48.599760 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:49.599464 master-0 kubenswrapper[7387]: I0308 03:24:49.599369 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:49.599464 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:49.599464 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:49.599464 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:49.600508 master-0 kubenswrapper[7387]: I0308 03:24:49.599473 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:49.917785 master-0 kubenswrapper[7387]: I0308 03:24:49.917660 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:24:49.918051 master-0 kubenswrapper[7387]: I0308 03:24:49.918038 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:24:50.599422 master-0 
kubenswrapper[7387]: I0308 03:24:50.599351 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:50.599422 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:50.599422 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:50.599422 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:50.600107 master-0 kubenswrapper[7387]: I0308 03:24:50.599445 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:51.599598 master-0 kubenswrapper[7387]: I0308 03:24:51.599471 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:51.599598 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:51.599598 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:51.599598 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:51.599598 master-0 kubenswrapper[7387]: I0308 03:24:51.599578 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:52.599552 master-0 kubenswrapper[7387]: I0308 03:24:52.599443 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:52.599552 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:52.599552 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:52.599552 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:52.599552 master-0 kubenswrapper[7387]: I0308 03:24:52.599529 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:52.744391 master-0 kubenswrapper[7387]: I0308 03:24:52.744296 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 08 03:24:52.744716 master-0 kubenswrapper[7387]: I0308 03:24:52.744652 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podUID="bf2c720c-7700-4cdb-b9e9-9341479046d6" containerName="installer" containerID="cri-o://4d2476856e29b23f71b3efa17f6e2d132475b3255bd78a4c4c27299dcf99f68f" gracePeriod=30 Mar 08 03:24:53.598926 master-0 kubenswrapper[7387]: I0308 03:24:53.598834 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:53.598926 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:53.598926 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:53.598926 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:53.598926 master-0 kubenswrapper[7387]: I0308 03:24:53.598915 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" 
podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:54.599656 master-0 kubenswrapper[7387]: I0308 03:24:54.599594 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:54.599656 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:54.599656 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:54.599656 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:54.599656 master-0 kubenswrapper[7387]: I0308 03:24:54.599657 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:54.997759 master-0 kubenswrapper[7387]: E0308 03:24:54.997598 7387 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="15f50c071f201867c43246dac6f792f0b50269dfa6fccfc5c002806ce76a47a5" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 03:24:54.999052 master-0 kubenswrapper[7387]: E0308 03:24:54.999012 7387 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="15f50c071f201867c43246dac6f792f0b50269dfa6fccfc5c002806ce76a47a5" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 03:24:55.004254 master-0 kubenswrapper[7387]: E0308 03:24:55.004167 7387 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = 
command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="15f50c071f201867c43246dac6f792f0b50269dfa6fccfc5c002806ce76a47a5" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 03:24:55.004254 master-0 kubenswrapper[7387]: E0308 03:24:55.004241 7387 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" podUID="645d8c66-50e1-4e0e-ae02-5a766526652e" containerName="kube-multus-additional-cni-plugins" Mar 08 03:24:55.600231 master-0 kubenswrapper[7387]: I0308 03:24:55.600176 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:55.600231 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:55.600231 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:55.600231 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:55.601309 master-0 kubenswrapper[7387]: I0308 03:24:55.601262 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:56.599539 master-0 kubenswrapper[7387]: I0308 03:24:56.599431 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:56.599539 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:56.599539 master-0 
kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:56.599539 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:56.599539 master-0 kubenswrapper[7387]: I0308 03:24:56.599521 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:57.341090 master-0 kubenswrapper[7387]: I0308 03:24:57.341023 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 08 03:24:57.343534 master-0 kubenswrapper[7387]: I0308 03:24:57.343496 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 03:24:57.357080 master-0 kubenswrapper[7387]: I0308 03:24:57.356987 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 08 03:24:57.544805 master-0 kubenswrapper[7387]: I0308 03:24:57.544754 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c72cba0-0e56-43a8-b4dc-be4d61d8586e-kube-api-access\") pod \"installer-2-master-0\" (UID: \"4c72cba0-0e56-43a8-b4dc-be4d61d8586e\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 03:24:57.545139 master-0 kubenswrapper[7387]: I0308 03:24:57.545118 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4c72cba0-0e56-43a8-b4dc-be4d61d8586e-var-lock\") pod \"installer-2-master-0\" (UID: \"4c72cba0-0e56-43a8-b4dc-be4d61d8586e\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 03:24:57.545578 master-0 kubenswrapper[7387]: I0308 03:24:57.545495 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c72cba0-0e56-43a8-b4dc-be4d61d8586e-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"4c72cba0-0e56-43a8-b4dc-be4d61d8586e\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 03:24:57.599576 master-0 kubenswrapper[7387]: I0308 03:24:57.599440 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:57.599576 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:57.599576 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:57.599576 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:57.600169 master-0 kubenswrapper[7387]: I0308 03:24:57.600123 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:57.647285 master-0 kubenswrapper[7387]: I0308 03:24:57.647220 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c72cba0-0e56-43a8-b4dc-be4d61d8586e-kube-api-access\") pod \"installer-2-master-0\" (UID: \"4c72cba0-0e56-43a8-b4dc-be4d61d8586e\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 03:24:57.647442 master-0 kubenswrapper[7387]: I0308 03:24:57.647338 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4c72cba0-0e56-43a8-b4dc-be4d61d8586e-var-lock\") pod \"installer-2-master-0\" (UID: \"4c72cba0-0e56-43a8-b4dc-be4d61d8586e\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 03:24:57.647442 master-0 kubenswrapper[7387]: I0308 
03:24:57.647427 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c72cba0-0e56-43a8-b4dc-be4d61d8586e-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"4c72cba0-0e56-43a8-b4dc-be4d61d8586e\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 03:24:57.647571 master-0 kubenswrapper[7387]: I0308 03:24:57.647539 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c72cba0-0e56-43a8-b4dc-be4d61d8586e-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"4c72cba0-0e56-43a8-b4dc-be4d61d8586e\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 03:24:57.647646 master-0 kubenswrapper[7387]: I0308 03:24:57.647559 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4c72cba0-0e56-43a8-b4dc-be4d61d8586e-var-lock\") pod \"installer-2-master-0\" (UID: \"4c72cba0-0e56-43a8-b4dc-be4d61d8586e\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 03:24:57.663710 master-0 kubenswrapper[7387]: I0308 03:24:57.663686 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c72cba0-0e56-43a8-b4dc-be4d61d8586e-kube-api-access\") pod \"installer-2-master-0\" (UID: \"4c72cba0-0e56-43a8-b4dc-be4d61d8586e\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 03:24:57.666714 master-0 kubenswrapper[7387]: I0308 03:24:57.666671 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 03:24:58.176030 master-0 kubenswrapper[7387]: I0308 03:24:58.175972 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 08 03:24:58.180560 master-0 kubenswrapper[7387]: W0308 03:24:58.180511 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4c72cba0_0e56_43a8_b4dc_be4d61d8586e.slice/crio-7a45419530364c188aef518a9de7d23efe25929852f0e5387cf646d78e26f13f WatchSource:0}: Error finding container 7a45419530364c188aef518a9de7d23efe25929852f0e5387cf646d78e26f13f: Status 404 returned error can't find the container with id 7a45419530364c188aef518a9de7d23efe25929852f0e5387cf646d78e26f13f Mar 08 03:24:58.599795 master-0 kubenswrapper[7387]: I0308 03:24:58.599711 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:58.599795 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:58.599795 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:58.599795 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:58.600879 master-0 kubenswrapper[7387]: I0308 03:24:58.599834 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:24:59.085208 master-0 kubenswrapper[7387]: I0308 03:24:59.085072 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"4c72cba0-0e56-43a8-b4dc-be4d61d8586e","Type":"ContainerStarted","Data":"27d817f68d21ac51b2ebb172edc6d4964bf05b89fe7b9ddcc5a26865e8d3581b"} 
Mar 08 03:24:59.085208 master-0 kubenswrapper[7387]: I0308 03:24:59.085146 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"4c72cba0-0e56-43a8-b4dc-be4d61d8586e","Type":"ContainerStarted","Data":"7a45419530364c188aef518a9de7d23efe25929852f0e5387cf646d78e26f13f"} Mar 08 03:24:59.105889 master-0 kubenswrapper[7387]: I0308 03:24:59.105787 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=2.105763279 podStartE2EDuration="2.105763279s" podCreationTimestamp="2026-03-08 03:24:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:24:59.102082813 +0000 UTC m=+835.496558524" watchObservedRunningTime="2026-03-08 03:24:59.105763279 +0000 UTC m=+835.500238970" Mar 08 03:24:59.599449 master-0 kubenswrapper[7387]: I0308 03:24:59.599387 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:24:59.599449 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:24:59.599449 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:24:59.599449 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:24:59.599722 master-0 kubenswrapper[7387]: I0308 03:24:59.599458 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:00.599783 master-0 kubenswrapper[7387]: I0308 03:25:00.599683 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:00.599783 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:00.599783 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:00.599783 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:00.600788 master-0 kubenswrapper[7387]: I0308 03:25:00.599807 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:01.599643 master-0 kubenswrapper[7387]: I0308 03:25:01.599533 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:01.599643 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:01.599643 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:01.599643 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:01.600805 master-0 kubenswrapper[7387]: I0308 03:25:01.599646 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:02.420361 master-0 kubenswrapper[7387]: I0308 03:25:02.420279 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 08 03:25:02.421560 master-0 kubenswrapper[7387]: I0308 03:25:02.421505 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 08 03:25:02.424421 master-0 kubenswrapper[7387]: I0308 03:25:02.424361 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-drk56" Mar 08 03:25:02.424631 master-0 kubenswrapper[7387]: I0308 03:25:02.424590 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 08 03:25:02.438900 master-0 kubenswrapper[7387]: I0308 03:25:02.438803 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 08 03:25:02.516343 master-0 kubenswrapper[7387]: I0308 03:25:02.516190 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3c20b192-755d-46cd-ab12-2e823b92222e-var-lock\") pod \"installer-2-master-0\" (UID: \"3c20b192-755d-46cd-ab12-2e823b92222e\") " pod="openshift-etcd/installer-2-master-0" Mar 08 03:25:02.516343 master-0 kubenswrapper[7387]: I0308 03:25:02.516343 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c20b192-755d-46cd-ab12-2e823b92222e-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"3c20b192-755d-46cd-ab12-2e823b92222e\") " pod="openshift-etcd/installer-2-master-0" Mar 08 03:25:02.516722 master-0 kubenswrapper[7387]: I0308 03:25:02.516381 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c20b192-755d-46cd-ab12-2e823b92222e-kube-api-access\") pod \"installer-2-master-0\" (UID: \"3c20b192-755d-46cd-ab12-2e823b92222e\") " pod="openshift-etcd/installer-2-master-0" Mar 08 03:25:02.600095 master-0 kubenswrapper[7387]: I0308 03:25:02.599987 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:02.600095 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:02.600095 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:02.600095 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:02.600095 master-0 kubenswrapper[7387]: I0308 03:25:02.600085 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:02.618016 master-0 kubenswrapper[7387]: I0308 03:25:02.617945 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3c20b192-755d-46cd-ab12-2e823b92222e-var-lock\") pod \"installer-2-master-0\" (UID: \"3c20b192-755d-46cd-ab12-2e823b92222e\") " pod="openshift-etcd/installer-2-master-0" Mar 08 03:25:02.618217 master-0 kubenswrapper[7387]: I0308 03:25:02.618044 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c20b192-755d-46cd-ab12-2e823b92222e-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"3c20b192-755d-46cd-ab12-2e823b92222e\") " pod="openshift-etcd/installer-2-master-0" Mar 08 03:25:02.618217 master-0 kubenswrapper[7387]: I0308 03:25:02.618080 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c20b192-755d-46cd-ab12-2e823b92222e-kube-api-access\") pod \"installer-2-master-0\" (UID: \"3c20b192-755d-46cd-ab12-2e823b92222e\") " pod="openshift-etcd/installer-2-master-0" Mar 08 03:25:02.618217 master-0 kubenswrapper[7387]: I0308 03:25:02.618079 7387 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3c20b192-755d-46cd-ab12-2e823b92222e-var-lock\") pod \"installer-2-master-0\" (UID: \"3c20b192-755d-46cd-ab12-2e823b92222e\") " pod="openshift-etcd/installer-2-master-0" Mar 08 03:25:02.618454 master-0 kubenswrapper[7387]: I0308 03:25:02.618266 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c20b192-755d-46cd-ab12-2e823b92222e-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"3c20b192-755d-46cd-ab12-2e823b92222e\") " pod="openshift-etcd/installer-2-master-0" Mar 08 03:25:02.649137 master-0 kubenswrapper[7387]: I0308 03:25:02.649030 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c20b192-755d-46cd-ab12-2e823b92222e-kube-api-access\") pod \"installer-2-master-0\" (UID: \"3c20b192-755d-46cd-ab12-2e823b92222e\") " pod="openshift-etcd/installer-2-master-0" Mar 08 03:25:02.759576 master-0 kubenswrapper[7387]: I0308 03:25:02.759489 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 08 03:25:03.257081 master-0 kubenswrapper[7387]: I0308 03:25:03.257031 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 08 03:25:03.599299 master-0 kubenswrapper[7387]: I0308 03:25:03.599188 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:03.599299 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:03.599299 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:03.599299 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:03.599715 master-0 kubenswrapper[7387]: I0308 03:25:03.599341 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:04.120116 master-0 kubenswrapper[7387]: I0308 03:25:04.119843 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"3c20b192-755d-46cd-ab12-2e823b92222e","Type":"ContainerStarted","Data":"0f14e36a52435c9a7870808befbb0f157c9e7126b2ba8d72d22dd7d795a56f5e"} Mar 08 03:25:04.120116 master-0 kubenswrapper[7387]: I0308 03:25:04.119969 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"3c20b192-755d-46cd-ab12-2e823b92222e","Type":"ContainerStarted","Data":"a708aa69cc052f931f58c87cb7019d54064fd8232a5208d8d5f9a13a69e77e36"} Mar 08 03:25:04.149520 master-0 kubenswrapper[7387]: I0308 03:25:04.149411 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=2.149383199 
podStartE2EDuration="2.149383199s" podCreationTimestamp="2026-03-08 03:25:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:25:04.142611873 +0000 UTC m=+840.537087604" watchObservedRunningTime="2026-03-08 03:25:04.149383199 +0000 UTC m=+840.543858890" Mar 08 03:25:04.598999 master-0 kubenswrapper[7387]: I0308 03:25:04.598891 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:04.598999 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:04.598999 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:04.598999 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:04.598999 master-0 kubenswrapper[7387]: I0308 03:25:04.598981 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:05.030459 master-0 kubenswrapper[7387]: E0308 03:25:05.030360 7387 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="15f50c071f201867c43246dac6f792f0b50269dfa6fccfc5c002806ce76a47a5" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 03:25:05.032805 master-0 kubenswrapper[7387]: E0308 03:25:05.032617 7387 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="15f50c071f201867c43246dac6f792f0b50269dfa6fccfc5c002806ce76a47a5" 
cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 03:25:05.034631 master-0 kubenswrapper[7387]: E0308 03:25:05.034579 7387 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="15f50c071f201867c43246dac6f792f0b50269dfa6fccfc5c002806ce76a47a5" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 03:25:05.037125 master-0 kubenswrapper[7387]: E0308 03:25:05.034758 7387 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" podUID="645d8c66-50e1-4e0e-ae02-5a766526652e" containerName="kube-multus-additional-cni-plugins" Mar 08 03:25:05.599409 master-0 kubenswrapper[7387]: I0308 03:25:05.599353 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:05.599409 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:05.599409 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:05.599409 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:05.600655 master-0 kubenswrapper[7387]: I0308 03:25:05.600159 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:06.600567 master-0 kubenswrapper[7387]: I0308 03:25:06.600460 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:06.600567 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:06.600567 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:06.600567 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:06.600567 master-0 kubenswrapper[7387]: I0308 03:25:06.600548 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:07.599831 master-0 kubenswrapper[7387]: I0308 03:25:07.599767 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:07.599831 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:07.599831 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:07.599831 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:07.600509 master-0 kubenswrapper[7387]: I0308 03:25:07.600448 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:08.603072 master-0 kubenswrapper[7387]: I0308 03:25:08.602965 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:08.603072 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:08.603072 master-0 
kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:08.603072 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:08.604345 master-0 kubenswrapper[7387]: I0308 03:25:08.603113 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:09.060472 master-0 kubenswrapper[7387]: I0308 03:25:09.060385 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-9cmmj_645d8c66-50e1-4e0e-ae02-5a766526652e/kube-multus-additional-cni-plugins/0.log" Mar 08 03:25:09.060633 master-0 kubenswrapper[7387]: I0308 03:25:09.060514 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" Mar 08 03:25:09.137425 master-0 kubenswrapper[7387]: I0308 03:25:09.137349 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/645d8c66-50e1-4e0e-ae02-5a766526652e-tuning-conf-dir\") pod \"645d8c66-50e1-4e0e-ae02-5a766526652e\" (UID: \"645d8c66-50e1-4e0e-ae02-5a766526652e\") " Mar 08 03:25:09.137646 master-0 kubenswrapper[7387]: I0308 03:25:09.137542 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/645d8c66-50e1-4e0e-ae02-5a766526652e-cni-sysctl-allowlist\") pod \"645d8c66-50e1-4e0e-ae02-5a766526652e\" (UID: \"645d8c66-50e1-4e0e-ae02-5a766526652e\") " Mar 08 03:25:09.137646 master-0 kubenswrapper[7387]: I0308 03:25:09.137584 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/645d8c66-50e1-4e0e-ae02-5a766526652e-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "645d8c66-50e1-4e0e-ae02-5a766526652e" (UID: 
"645d8c66-50e1-4e0e-ae02-5a766526652e"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:25:09.137768 master-0 kubenswrapper[7387]: I0308 03:25:09.137673 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/645d8c66-50e1-4e0e-ae02-5a766526652e-ready\") pod \"645d8c66-50e1-4e0e-ae02-5a766526652e\" (UID: \"645d8c66-50e1-4e0e-ae02-5a766526652e\") " Mar 08 03:25:09.137768 master-0 kubenswrapper[7387]: I0308 03:25:09.137744 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqddd\" (UniqueName: \"kubernetes.io/projected/645d8c66-50e1-4e0e-ae02-5a766526652e-kube-api-access-zqddd\") pod \"645d8c66-50e1-4e0e-ae02-5a766526652e\" (UID: \"645d8c66-50e1-4e0e-ae02-5a766526652e\") " Mar 08 03:25:09.138205 master-0 kubenswrapper[7387]: I0308 03:25:09.138172 7387 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/645d8c66-50e1-4e0e-ae02-5a766526652e-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 03:25:09.138282 master-0 kubenswrapper[7387]: I0308 03:25:09.138208 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/645d8c66-50e1-4e0e-ae02-5a766526652e-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "645d8c66-50e1-4e0e-ae02-5a766526652e" (UID: "645d8c66-50e1-4e0e-ae02-5a766526652e"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:25:09.138849 master-0 kubenswrapper[7387]: I0308 03:25:09.138760 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/645d8c66-50e1-4e0e-ae02-5a766526652e-ready" (OuterVolumeSpecName: "ready") pod "645d8c66-50e1-4e0e-ae02-5a766526652e" (UID: "645d8c66-50e1-4e0e-ae02-5a766526652e"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 03:25:09.145455 master-0 kubenswrapper[7387]: I0308 03:25:09.145381 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/645d8c66-50e1-4e0e-ae02-5a766526652e-kube-api-access-zqddd" (OuterVolumeSpecName: "kube-api-access-zqddd") pod "645d8c66-50e1-4e0e-ae02-5a766526652e" (UID: "645d8c66-50e1-4e0e-ae02-5a766526652e"). InnerVolumeSpecName "kube-api-access-zqddd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:25:09.159087 master-0 kubenswrapper[7387]: I0308 03:25:09.159009 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-9cmmj_645d8c66-50e1-4e0e-ae02-5a766526652e/kube-multus-additional-cni-plugins/0.log" Mar 08 03:25:09.159283 master-0 kubenswrapper[7387]: I0308 03:25:09.159108 7387 generic.go:334] "Generic (PLEG): container finished" podID="645d8c66-50e1-4e0e-ae02-5a766526652e" containerID="15f50c071f201867c43246dac6f792f0b50269dfa6fccfc5c002806ce76a47a5" exitCode=137 Mar 08 03:25:09.159283 master-0 kubenswrapper[7387]: I0308 03:25:09.159162 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" event={"ID":"645d8c66-50e1-4e0e-ae02-5a766526652e","Type":"ContainerDied","Data":"15f50c071f201867c43246dac6f792f0b50269dfa6fccfc5c002806ce76a47a5"} Mar 08 03:25:09.159283 master-0 kubenswrapper[7387]: I0308 03:25:09.159205 7387 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" Mar 08 03:25:09.159283 master-0 kubenswrapper[7387]: I0308 03:25:09.159228 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-9cmmj" event={"ID":"645d8c66-50e1-4e0e-ae02-5a766526652e","Type":"ContainerDied","Data":"12c51f44e28e5558cd4bdffa4e53ad4825db01b2ba98d6f7f708ff6d84be0671"} Mar 08 03:25:09.159283 master-0 kubenswrapper[7387]: I0308 03:25:09.159267 7387 scope.go:117] "RemoveContainer" containerID="15f50c071f201867c43246dac6f792f0b50269dfa6fccfc5c002806ce76a47a5" Mar 08 03:25:09.221165 master-0 kubenswrapper[7387]: I0308 03:25:09.221107 7387 scope.go:117] "RemoveContainer" containerID="15f50c071f201867c43246dac6f792f0b50269dfa6fccfc5c002806ce76a47a5" Mar 08 03:25:09.225128 master-0 kubenswrapper[7387]: E0308 03:25:09.225070 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15f50c071f201867c43246dac6f792f0b50269dfa6fccfc5c002806ce76a47a5\": container with ID starting with 15f50c071f201867c43246dac6f792f0b50269dfa6fccfc5c002806ce76a47a5 not found: ID does not exist" containerID="15f50c071f201867c43246dac6f792f0b50269dfa6fccfc5c002806ce76a47a5" Mar 08 03:25:09.225225 master-0 kubenswrapper[7387]: I0308 03:25:09.225141 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15f50c071f201867c43246dac6f792f0b50269dfa6fccfc5c002806ce76a47a5"} err="failed to get container status \"15f50c071f201867c43246dac6f792f0b50269dfa6fccfc5c002806ce76a47a5\": rpc error: code = NotFound desc = could not find container \"15f50c071f201867c43246dac6f792f0b50269dfa6fccfc5c002806ce76a47a5\": container with ID starting with 15f50c071f201867c43246dac6f792f0b50269dfa6fccfc5c002806ce76a47a5 not found: ID does not exist" Mar 08 03:25:09.243931 master-0 kubenswrapper[7387]: I0308 03:25:09.239533 7387 reconciler_common.go:293] "Volume detached 
for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/645d8c66-50e1-4e0e-ae02-5a766526652e-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Mar 08 03:25:09.243931 master-0 kubenswrapper[7387]: I0308 03:25:09.239575 7387 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/645d8c66-50e1-4e0e-ae02-5a766526652e-ready\") on node \"master-0\" DevicePath \"\"" Mar 08 03:25:09.243931 master-0 kubenswrapper[7387]: I0308 03:25:09.239588 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqddd\" (UniqueName: \"kubernetes.io/projected/645d8c66-50e1-4e0e-ae02-5a766526652e-kube-api-access-zqddd\") on node \"master-0\" DevicePath \"\"" Mar 08 03:25:09.243931 master-0 kubenswrapper[7387]: I0308 03:25:09.243063 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9cmmj"] Mar 08 03:25:09.257854 master-0 kubenswrapper[7387]: I0308 03:25:09.257799 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-9cmmj"] Mar 08 03:25:09.598791 master-0 kubenswrapper[7387]: I0308 03:25:09.598703 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:09.598791 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:09.598791 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:09.598791 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:09.598791 master-0 kubenswrapper[7387]: I0308 03:25:09.598786 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Mar 08 03:25:09.774541 master-0 kubenswrapper[7387]: I0308 03:25:09.774464 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="645d8c66-50e1-4e0e-ae02-5a766526652e" path="/var/lib/kubelet/pods/645d8c66-50e1-4e0e-ae02-5a766526652e/volumes" Mar 08 03:25:09.926510 master-0 kubenswrapper[7387]: I0308 03:25:09.926408 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:25:09.935668 master-0 kubenswrapper[7387]: I0308 03:25:09.935615 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:25:10.598826 master-0 kubenswrapper[7387]: I0308 03:25:10.598713 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:10.598826 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:10.598826 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:10.598826 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:10.599217 master-0 kubenswrapper[7387]: I0308 03:25:10.598833 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:11.599307 master-0 kubenswrapper[7387]: I0308 03:25:11.599240 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:11.599307 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 
03:25:11.599307 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:11.599307 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:11.600112 master-0 kubenswrapper[7387]: I0308 03:25:11.599333 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:12.599356 master-0 kubenswrapper[7387]: I0308 03:25:12.599311 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:12.599356 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:12.599356 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:12.599356 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:12.600075 master-0 kubenswrapper[7387]: I0308 03:25:12.600043 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:12.861646 master-0 kubenswrapper[7387]: I0308 03:25:12.861495 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 08 03:25:12.861893 master-0 kubenswrapper[7387]: I0308 03:25:12.861737 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-2-master-0" podUID="4c72cba0-0e56-43a8-b4dc-be4d61d8586e" containerName="installer" containerID="cri-o://27d817f68d21ac51b2ebb172edc6d4964bf05b89fe7b9ddcc5a26865e8d3581b" gracePeriod=30 Mar 08 03:25:13.200681 master-0 kubenswrapper[7387]: I0308 03:25:13.200564 7387 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_4c72cba0-0e56-43a8-b4dc-be4d61d8586e/installer/0.log" Mar 08 03:25:13.200681 master-0 kubenswrapper[7387]: I0308 03:25:13.200608 7387 generic.go:334] "Generic (PLEG): container finished" podID="4c72cba0-0e56-43a8-b4dc-be4d61d8586e" containerID="27d817f68d21ac51b2ebb172edc6d4964bf05b89fe7b9ddcc5a26865e8d3581b" exitCode=1 Mar 08 03:25:13.200681 master-0 kubenswrapper[7387]: I0308 03:25:13.200639 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"4c72cba0-0e56-43a8-b4dc-be4d61d8586e","Type":"ContainerDied","Data":"27d817f68d21ac51b2ebb172edc6d4964bf05b89fe7b9ddcc5a26865e8d3581b"} Mar 08 03:25:13.600448 master-0 kubenswrapper[7387]: I0308 03:25:13.600355 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:13.600448 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:13.600448 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:13.600448 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:13.600448 master-0 kubenswrapper[7387]: I0308 03:25:13.600439 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:13.777692 master-0 kubenswrapper[7387]: I0308 03:25:13.777647 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_4c72cba0-0e56-43a8-b4dc-be4d61d8586e/installer/0.log" Mar 08 03:25:13.777947 master-0 kubenswrapper[7387]: I0308 03:25:13.777742 7387 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 03:25:13.827345 master-0 kubenswrapper[7387]: I0308 03:25:13.826943 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c72cba0-0e56-43a8-b4dc-be4d61d8586e-kube-api-access\") pod \"4c72cba0-0e56-43a8-b4dc-be4d61d8586e\" (UID: \"4c72cba0-0e56-43a8-b4dc-be4d61d8586e\") " Mar 08 03:25:13.827345 master-0 kubenswrapper[7387]: I0308 03:25:13.827042 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c72cba0-0e56-43a8-b4dc-be4d61d8586e-kubelet-dir\") pod \"4c72cba0-0e56-43a8-b4dc-be4d61d8586e\" (UID: \"4c72cba0-0e56-43a8-b4dc-be4d61d8586e\") " Mar 08 03:25:13.827345 master-0 kubenswrapper[7387]: I0308 03:25:13.827094 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4c72cba0-0e56-43a8-b4dc-be4d61d8586e-var-lock\") pod \"4c72cba0-0e56-43a8-b4dc-be4d61d8586e\" (UID: \"4c72cba0-0e56-43a8-b4dc-be4d61d8586e\") " Mar 08 03:25:13.827653 master-0 kubenswrapper[7387]: I0308 03:25:13.827519 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c72cba0-0e56-43a8-b4dc-be4d61d8586e-var-lock" (OuterVolumeSpecName: "var-lock") pod "4c72cba0-0e56-43a8-b4dc-be4d61d8586e" (UID: "4c72cba0-0e56-43a8-b4dc-be4d61d8586e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:25:13.827653 master-0 kubenswrapper[7387]: I0308 03:25:13.827569 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c72cba0-0e56-43a8-b4dc-be4d61d8586e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4c72cba0-0e56-43a8-b4dc-be4d61d8586e" (UID: "4c72cba0-0e56-43a8-b4dc-be4d61d8586e"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:25:13.830680 master-0 kubenswrapper[7387]: I0308 03:25:13.830635 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c72cba0-0e56-43a8-b4dc-be4d61d8586e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4c72cba0-0e56-43a8-b4dc-be4d61d8586e" (UID: "4c72cba0-0e56-43a8-b4dc-be4d61d8586e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:25:13.929292 master-0 kubenswrapper[7387]: I0308 03:25:13.929145 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c72cba0-0e56-43a8-b4dc-be4d61d8586e-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 03:25:13.929634 master-0 kubenswrapper[7387]: I0308 03:25:13.929566 7387 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c72cba0-0e56-43a8-b4dc-be4d61d8586e-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 03:25:13.929868 master-0 kubenswrapper[7387]: I0308 03:25:13.929843 7387 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4c72cba0-0e56-43a8-b4dc-be4d61d8586e-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 03:25:14.212064 master-0 kubenswrapper[7387]: I0308 03:25:14.212008 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_4c72cba0-0e56-43a8-b4dc-be4d61d8586e/installer/0.log" Mar 08 03:25:14.212064 master-0 kubenswrapper[7387]: I0308 03:25:14.212061 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"4c72cba0-0e56-43a8-b4dc-be4d61d8586e","Type":"ContainerDied","Data":"7a45419530364c188aef518a9de7d23efe25929852f0e5387cf646d78e26f13f"} Mar 08 03:25:14.212483 master-0 
kubenswrapper[7387]: I0308 03:25:14.212095 7387 scope.go:117] "RemoveContainer" containerID="27d817f68d21ac51b2ebb172edc6d4964bf05b89fe7b9ddcc5a26865e8d3581b" Mar 08 03:25:14.212483 master-0 kubenswrapper[7387]: I0308 03:25:14.212192 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 08 03:25:14.270202 master-0 kubenswrapper[7387]: I0308 03:25:14.270064 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 08 03:25:14.281023 master-0 kubenswrapper[7387]: I0308 03:25:14.280891 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 08 03:25:14.601038 master-0 kubenswrapper[7387]: I0308 03:25:14.600939 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:14.601038 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:14.601038 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:14.601038 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:14.602182 master-0 kubenswrapper[7387]: I0308 03:25:14.601047 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:15.443614 master-0 kubenswrapper[7387]: I0308 03:25:15.443506 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 08 03:25:15.444471 master-0 kubenswrapper[7387]: E0308 03:25:15.444406 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="645d8c66-50e1-4e0e-ae02-5a766526652e" 
containerName="kube-multus-additional-cni-plugins" Mar 08 03:25:15.444471 master-0 kubenswrapper[7387]: I0308 03:25:15.444458 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="645d8c66-50e1-4e0e-ae02-5a766526652e" containerName="kube-multus-additional-cni-plugins" Mar 08 03:25:15.444746 master-0 kubenswrapper[7387]: E0308 03:25:15.444511 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c72cba0-0e56-43a8-b4dc-be4d61d8586e" containerName="installer" Mar 08 03:25:15.444746 master-0 kubenswrapper[7387]: I0308 03:25:15.444525 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c72cba0-0e56-43a8-b4dc-be4d61d8586e" containerName="installer" Mar 08 03:25:15.444972 master-0 kubenswrapper[7387]: I0308 03:25:15.444752 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="645d8c66-50e1-4e0e-ae02-5a766526652e" containerName="kube-multus-additional-cni-plugins" Mar 08 03:25:15.444972 master-0 kubenswrapper[7387]: I0308 03:25:15.444789 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c72cba0-0e56-43a8-b4dc-be4d61d8586e" containerName="installer" Mar 08 03:25:15.445598 master-0 kubenswrapper[7387]: I0308 03:25:15.445540 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 03:25:15.448606 master-0 kubenswrapper[7387]: I0308 03:25:15.448532 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 08 03:25:15.451837 master-0 kubenswrapper[7387]: I0308 03:25:15.451773 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-2tj6k" Mar 08 03:25:15.476035 master-0 kubenswrapper[7387]: I0308 03:25:15.475935 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 08 03:25:15.562261 master-0 kubenswrapper[7387]: I0308 03:25:15.562173 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a7152f2-d51f-4e15-8e0a-92278cbecd53-kube-api-access\") pod \"installer-2-master-0\" (UID: \"6a7152f2-d51f-4e15-8e0a-92278cbecd53\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 03:25:15.562261 master-0 kubenswrapper[7387]: I0308 03:25:15.562253 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6a7152f2-d51f-4e15-8e0a-92278cbecd53-var-lock\") pod \"installer-2-master-0\" (UID: \"6a7152f2-d51f-4e15-8e0a-92278cbecd53\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 03:25:15.562660 master-0 kubenswrapper[7387]: I0308 03:25:15.562611 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6a7152f2-d51f-4e15-8e0a-92278cbecd53-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"6a7152f2-d51f-4e15-8e0a-92278cbecd53\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 03:25:15.599799 master-0 
kubenswrapper[7387]: I0308 03:25:15.599712 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:15.599799 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:15.599799 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:15.599799 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:15.600206 master-0 kubenswrapper[7387]: I0308 03:25:15.599812 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:15.664532 master-0 kubenswrapper[7387]: I0308 03:25:15.664415 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6a7152f2-d51f-4e15-8e0a-92278cbecd53-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"6a7152f2-d51f-4e15-8e0a-92278cbecd53\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 03:25:15.665056 master-0 kubenswrapper[7387]: I0308 03:25:15.664610 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a7152f2-d51f-4e15-8e0a-92278cbecd53-kube-api-access\") pod \"installer-2-master-0\" (UID: \"6a7152f2-d51f-4e15-8e0a-92278cbecd53\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 03:25:15.665056 master-0 kubenswrapper[7387]: I0308 03:25:15.664617 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6a7152f2-d51f-4e15-8e0a-92278cbecd53-kubelet-dir\") pod \"installer-2-master-0\" (UID: 
\"6a7152f2-d51f-4e15-8e0a-92278cbecd53\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 03:25:15.665056 master-0 kubenswrapper[7387]: I0308 03:25:15.664652 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6a7152f2-d51f-4e15-8e0a-92278cbecd53-var-lock\") pod \"installer-2-master-0\" (UID: \"6a7152f2-d51f-4e15-8e0a-92278cbecd53\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 03:25:15.665056 master-0 kubenswrapper[7387]: I0308 03:25:15.664719 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6a7152f2-d51f-4e15-8e0a-92278cbecd53-var-lock\") pod \"installer-2-master-0\" (UID: \"6a7152f2-d51f-4e15-8e0a-92278cbecd53\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 03:25:15.684406 master-0 kubenswrapper[7387]: I0308 03:25:15.684368 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a7152f2-d51f-4e15-8e0a-92278cbecd53-kube-api-access\") pod \"installer-2-master-0\" (UID: \"6a7152f2-d51f-4e15-8e0a-92278cbecd53\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 03:25:15.770505 master-0 kubenswrapper[7387]: I0308 03:25:15.770447 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c72cba0-0e56-43a8-b4dc-be4d61d8586e" path="/var/lib/kubelet/pods/4c72cba0-0e56-43a8-b4dc-be4d61d8586e/volumes" Mar 08 03:25:15.783024 master-0 kubenswrapper[7387]: I0308 03:25:15.782996 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 03:25:16.254552 master-0 kubenswrapper[7387]: I0308 03:25:16.254493 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 08 03:25:16.263991 master-0 kubenswrapper[7387]: W0308 03:25:16.263899 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6a7152f2_d51f_4e15_8e0a_92278cbecd53.slice/crio-7a3f99a1a7c1a58ad3307e4987c29356dde8b338b069ed85a0484f6cbe18d2c5 WatchSource:0}: Error finding container 7a3f99a1a7c1a58ad3307e4987c29356dde8b338b069ed85a0484f6cbe18d2c5: Status 404 returned error can't find the container with id 7a3f99a1a7c1a58ad3307e4987c29356dde8b338b069ed85a0484f6cbe18d2c5 Mar 08 03:25:16.441153 master-0 kubenswrapper[7387]: I0308 03:25:16.441060 7387 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 08 03:25:16.441654 master-0 kubenswrapper[7387]: I0308 03:25:16.441607 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" containerID="cri-o://da5c0193c648331dfa0a6bd33ec4c599a059bf9e4842b26f52002f9bec9abbb4" gracePeriod=30 Mar 08 03:25:16.442277 master-0 kubenswrapper[7387]: I0308 03:25:16.442241 7387 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 08 03:25:16.442595 master-0 kubenswrapper[7387]: E0308 03:25:16.442505 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 08 03:25:16.442595 master-0 kubenswrapper[7387]: I0308 03:25:16.442521 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 08 03:25:16.442779 master-0 
kubenswrapper[7387]: I0308 03:25:16.442741 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 08 03:25:16.442779 master-0 kubenswrapper[7387]: I0308 03:25:16.442773 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 08 03:25:16.443005 master-0 kubenswrapper[7387]: E0308 03:25:16.442982 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 08 03:25:16.443005 master-0 kubenswrapper[7387]: I0308 03:25:16.443002 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 08 03:25:16.444594 master-0 kubenswrapper[7387]: I0308 03:25:16.444549 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 08 03:25:16.581970 master-0 kubenswrapper[7387]: I0308 03:25:16.575706 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 08 03:25:16.581970 master-0 kubenswrapper[7387]: I0308 03:25:16.577600 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 08 03:25:16.581970 master-0 kubenswrapper[7387]: I0308 03:25:16.577855 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") 
" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 08 03:25:16.599541 master-0 kubenswrapper[7387]: I0308 03:25:16.599467 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:16.599541 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:16.599541 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:16.599541 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:16.600129 master-0 kubenswrapper[7387]: I0308 03:25:16.599560 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:16.612058 master-0 kubenswrapper[7387]: I0308 03:25:16.611993 7387 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 08 03:25:16.655004 master-0 kubenswrapper[7387]: I0308 03:25:16.654939 7387 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="e71069a7-ad41-4d0b-b5a0-e8906dcc53f7" Mar 08 03:25:16.679168 master-0 kubenswrapper[7387]: I0308 03:25:16.679126 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"a1a56802af72ce1aac6b5077f1695ac0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " Mar 08 03:25:16.679722 master-0 kubenswrapper[7387]: I0308 03:25:16.679220 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"a1a56802af72ce1aac6b5077f1695ac0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " Mar 08 03:25:16.679722 master-0 kubenswrapper[7387]: I0308 03:25:16.679275 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets" (OuterVolumeSpecName: "secrets") pod "a1a56802af72ce1aac6b5077f1695ac0" (UID: "a1a56802af72ce1aac6b5077f1695ac0"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:25:16.679722 master-0 kubenswrapper[7387]: I0308 03:25:16.679398 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs" (OuterVolumeSpecName: "logs") pod "a1a56802af72ce1aac6b5077f1695ac0" (UID: "a1a56802af72ce1aac6b5077f1695ac0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:25:16.679722 master-0 kubenswrapper[7387]: I0308 03:25:16.679697 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 08 03:25:16.680001 master-0 kubenswrapper[7387]: I0308 03:25:16.679783 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 08 03:25:16.680001 master-0 kubenswrapper[7387]: I0308 03:25:16.679838 7387 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") on node \"master-0\" DevicePath \"\"" Mar 08 03:25:16.680001 master-0 kubenswrapper[7387]: I0308 03:25:16.679852 7387 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") on node \"master-0\" DevicePath \"\"" Mar 08 03:25:16.680001 master-0 kubenswrapper[7387]: I0308 03:25:16.679899 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 08 03:25:16.680001 master-0 kubenswrapper[7387]: I0308 03:25:16.679985 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 08 03:25:16.871296 master-0 kubenswrapper[7387]: I0308 03:25:16.871136 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 08 03:25:16.907220 master-0 kubenswrapper[7387]: W0308 03:25:16.907140 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3d45b6ce1b3764f9927e623a71adf8.slice/crio-58f21db0fa1eb017fe823a0691c0c2ecef386aab7abe2946fa7a3c24e39e3c68 WatchSource:0}: Error finding container 58f21db0fa1eb017fe823a0691c0c2ecef386aab7abe2946fa7a3c24e39e3c68: Status 404 returned error can't find the container with id 58f21db0fa1eb017fe823a0691c0c2ecef386aab7abe2946fa7a3c24e39e3c68 Mar 08 03:25:17.247476 master-0 kubenswrapper[7387]: I0308 03:25:17.247336 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-xhkzl_d5f84bd4-2803-41ff-a1d1-a549991fe895/multus-admission-controller/0.log" Mar 08 03:25:17.247476 master-0 kubenswrapper[7387]: I0308 03:25:17.247380 7387 generic.go:334] "Generic (PLEG): container finished" podID="d5f84bd4-2803-41ff-a1d1-a549991fe895" containerID="d8908e02467ded566e9d23379f605a2e44df49bd48cf230c5b0b05ea8c4f6b21" exitCode=137 Mar 08 03:25:17.247476 master-0 kubenswrapper[7387]: I0308 03:25:17.247423 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl" event={"ID":"d5f84bd4-2803-41ff-a1d1-a549991fe895","Type":"ContainerDied","Data":"d8908e02467ded566e9d23379f605a2e44df49bd48cf230c5b0b05ea8c4f6b21"} Mar 08 03:25:17.249100 master-0 kubenswrapper[7387]: I0308 03:25:17.248964 7387 generic.go:334] "Generic (PLEG): 
container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="da5c0193c648331dfa0a6bd33ec4c599a059bf9e4842b26f52002f9bec9abbb4" exitCode=0 Mar 08 03:25:17.249100 master-0 kubenswrapper[7387]: I0308 03:25:17.249005 7387 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfc903a3a09201aa3b1c76a517a337916f356be7b6618a2128b1dc4f4785ac63" Mar 08 03:25:17.249100 master-0 kubenswrapper[7387]: I0308 03:25:17.249020 7387 scope.go:117] "RemoveContainer" containerID="f80accad2b75f0dbc8ca9ec1b9207f9c29402e934558ea0edecba0bf20e9769f" Mar 08 03:25:17.249358 master-0 kubenswrapper[7387]: I0308 03:25:17.249286 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 08 03:25:17.255177 master-0 kubenswrapper[7387]: I0308 03:25:17.255135 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"6a7152f2-d51f-4e15-8e0a-92278cbecd53","Type":"ContainerStarted","Data":"6337e7946252e7bfd9c2e54f9544cec48f69509210920bb45fdd12f2048594e7"} Mar 08 03:25:17.255271 master-0 kubenswrapper[7387]: I0308 03:25:17.255190 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"6a7152f2-d51f-4e15-8e0a-92278cbecd53","Type":"ContainerStarted","Data":"7a3f99a1a7c1a58ad3307e4987c29356dde8b338b069ed85a0484f6cbe18d2c5"} Mar 08 03:25:17.259692 master-0 kubenswrapper[7387]: I0308 03:25:17.259671 7387 generic.go:334] "Generic (PLEG): container finished" podID="ddf7d93b-6a73-4de5-b984-cde6fba07b48" containerID="48906d4a9827177a4feca5f421bb263deddb2a2e07e0343746350be07efd8684" exitCode=0 Mar 08 03:25:17.259851 master-0 kubenswrapper[7387]: I0308 03:25:17.259834 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" 
event={"ID":"ddf7d93b-6a73-4de5-b984-cde6fba07b48","Type":"ContainerDied","Data":"48906d4a9827177a4feca5f421bb263deddb2a2e07e0343746350be07efd8684"} Mar 08 03:25:17.262442 master-0 kubenswrapper[7387]: I0308 03:25:17.262412 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"a5b213491f434eaf96969b81f553d91a137807d1aa05fbe10cf34450dd9f1422"} Mar 08 03:25:17.262513 master-0 kubenswrapper[7387]: I0308 03:25:17.262449 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"58f21db0fa1eb017fe823a0691c0c2ecef386aab7abe2946fa7a3c24e39e3c68"} Mar 08 03:25:17.283722 master-0 kubenswrapper[7387]: I0308 03:25:17.283632 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=2.283610535 podStartE2EDuration="2.283610535s" podCreationTimestamp="2026-03-08 03:25:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:25:17.277196947 +0000 UTC m=+853.671672628" watchObservedRunningTime="2026-03-08 03:25:17.283610535 +0000 UTC m=+853.678086216" Mar 08 03:25:17.344048 master-0 kubenswrapper[7387]: I0308 03:25:17.342516 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 08 03:25:17.349062 master-0 kubenswrapper[7387]: I0308 03:25:17.348785 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 08 03:25:17.382842 master-0 kubenswrapper[7387]: I0308 03:25:17.382762 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 08 03:25:17.401197 master-0 kubenswrapper[7387]: I0308 03:25:17.401097 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aea52bbe-5b64-45c7-8f8c-81d027f133d0-var-lock\") pod \"installer-3-master-0\" (UID: \"aea52bbe-5b64-45c7-8f8c-81d027f133d0\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 08 03:25:17.401197 master-0 kubenswrapper[7387]: I0308 03:25:17.401136 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aea52bbe-5b64-45c7-8f8c-81d027f133d0-kube-api-access\") pod \"installer-3-master-0\" (UID: \"aea52bbe-5b64-45c7-8f8c-81d027f133d0\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 08 03:25:17.401197 master-0 kubenswrapper[7387]: I0308 03:25:17.401162 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aea52bbe-5b64-45c7-8f8c-81d027f133d0-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"aea52bbe-5b64-45c7-8f8c-81d027f133d0\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 08 03:25:17.437669 master-0 kubenswrapper[7387]: I0308 03:25:17.437642 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-xhkzl_d5f84bd4-2803-41ff-a1d1-a549991fe895/multus-admission-controller/0.log" Mar 08 03:25:17.437793 master-0 kubenswrapper[7387]: I0308 03:25:17.437710 7387 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl" Mar 08 03:25:17.502678 master-0 kubenswrapper[7387]: I0308 03:25:17.502626 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v2gh\" (UniqueName: \"kubernetes.io/projected/d5f84bd4-2803-41ff-a1d1-a549991fe895-kube-api-access-7v2gh\") pod \"d5f84bd4-2803-41ff-a1d1-a549991fe895\" (UID: \"d5f84bd4-2803-41ff-a1d1-a549991fe895\") " Mar 08 03:25:17.502678 master-0 kubenswrapper[7387]: I0308 03:25:17.502685 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs\") pod \"d5f84bd4-2803-41ff-a1d1-a549991fe895\" (UID: \"d5f84bd4-2803-41ff-a1d1-a549991fe895\") " Mar 08 03:25:17.502962 master-0 kubenswrapper[7387]: I0308 03:25:17.502820 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aea52bbe-5b64-45c7-8f8c-81d027f133d0-var-lock\") pod \"installer-3-master-0\" (UID: \"aea52bbe-5b64-45c7-8f8c-81d027f133d0\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 08 03:25:17.502962 master-0 kubenswrapper[7387]: I0308 03:25:17.502936 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aea52bbe-5b64-45c7-8f8c-81d027f133d0-var-lock\") pod \"installer-3-master-0\" (UID: \"aea52bbe-5b64-45c7-8f8c-81d027f133d0\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 08 03:25:17.503049 master-0 kubenswrapper[7387]: I0308 03:25:17.502994 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aea52bbe-5b64-45c7-8f8c-81d027f133d0-kube-api-access\") pod \"installer-3-master-0\" (UID: \"aea52bbe-5b64-45c7-8f8c-81d027f133d0\") " 
pod="openshift-kube-apiserver/installer-3-master-0" Mar 08 03:25:17.503049 master-0 kubenswrapper[7387]: I0308 03:25:17.503025 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aea52bbe-5b64-45c7-8f8c-81d027f133d0-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"aea52bbe-5b64-45c7-8f8c-81d027f133d0\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 08 03:25:17.503257 master-0 kubenswrapper[7387]: I0308 03:25:17.503231 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aea52bbe-5b64-45c7-8f8c-81d027f133d0-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"aea52bbe-5b64-45c7-8f8c-81d027f133d0\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 08 03:25:17.507990 master-0 kubenswrapper[7387]: I0308 03:25:17.505728 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "d5f84bd4-2803-41ff-a1d1-a549991fe895" (UID: "d5f84bd4-2803-41ff-a1d1-a549991fe895"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:25:17.507990 master-0 kubenswrapper[7387]: I0308 03:25:17.506002 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5f84bd4-2803-41ff-a1d1-a549991fe895-kube-api-access-7v2gh" (OuterVolumeSpecName: "kube-api-access-7v2gh") pod "d5f84bd4-2803-41ff-a1d1-a549991fe895" (UID: "d5f84bd4-2803-41ff-a1d1-a549991fe895"). InnerVolumeSpecName "kube-api-access-7v2gh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:25:17.521087 master-0 kubenswrapper[7387]: I0308 03:25:17.521028 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aea52bbe-5b64-45c7-8f8c-81d027f133d0-kube-api-access\") pod \"installer-3-master-0\" (UID: \"aea52bbe-5b64-45c7-8f8c-81d027f133d0\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 08 03:25:17.599125 master-0 kubenswrapper[7387]: I0308 03:25:17.599053 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:17.599125 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:17.599125 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:17.599125 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:17.599565 master-0 kubenswrapper[7387]: I0308 03:25:17.599128 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:17.604851 master-0 kubenswrapper[7387]: I0308 03:25:17.604801 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v2gh\" (UniqueName: \"kubernetes.io/projected/d5f84bd4-2803-41ff-a1d1-a549991fe895-kube-api-access-7v2gh\") on node \"master-0\" DevicePath \"\"" Mar 08 03:25:17.604851 master-0 kubenswrapper[7387]: I0308 03:25:17.604838 7387 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d5f84bd4-2803-41ff-a1d1-a549991fe895-webhook-certs\") on node \"master-0\" DevicePath \"\"" Mar 08 03:25:17.699721 master-0 kubenswrapper[7387]: I0308 03:25:17.699550 7387 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 08 03:25:17.782245 master-0 kubenswrapper[7387]: I0308 03:25:17.781878 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1a56802af72ce1aac6b5077f1695ac0" path="/var/lib/kubelet/pods/a1a56802af72ce1aac6b5077f1695ac0/volumes"
Mar 08 03:25:17.782578 master-0 kubenswrapper[7387]: I0308 03:25:17.782404 7387 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID=""
Mar 08 03:25:17.801374 master-0 kubenswrapper[7387]: I0308 03:25:17.801046 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Mar 08 03:25:17.801374 master-0 kubenswrapper[7387]: I0308 03:25:17.801080 7387 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="e71069a7-ad41-4d0b-b5a0-e8906dcc53f7"
Mar 08 03:25:17.808344 master-0 kubenswrapper[7387]: I0308 03:25:17.807622 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Mar 08 03:25:17.808344 master-0 kubenswrapper[7387]: I0308 03:25:17.807855 7387 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="e71069a7-ad41-4d0b-b5a0-e8906dcc53f7"
Mar 08 03:25:18.254838 master-0 kubenswrapper[7387]: I0308 03:25:18.251869 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 08 03:25:18.260323 master-0 kubenswrapper[7387]: W0308 03:25:18.260246 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podaea52bbe_5b64_45c7_8f8c_81d027f133d0.slice/crio-ee2ff48f65a67b3bbbb6b179a0933cc0168e98cece572d365f2988cd098c9b0b WatchSource:0}: Error finding container
ee2ff48f65a67b3bbbb6b179a0933cc0168e98cece572d365f2988cd098c9b0b: Status 404 returned error can't find the container with id ee2ff48f65a67b3bbbb6b179a0933cc0168e98cece572d365f2988cd098c9b0b
Mar 08 03:25:18.273199 master-0 kubenswrapper[7387]: I0308 03:25:18.273130 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"aea52bbe-5b64-45c7-8f8c-81d027f133d0","Type":"ContainerStarted","Data":"ee2ff48f65a67b3bbbb6b179a0933cc0168e98cece572d365f2988cd098c9b0b"}
Mar 08 03:25:18.277979 master-0 kubenswrapper[7387]: I0308 03:25:18.276065 7387 generic.go:334] "Generic (PLEG): container finished" podID="1d3d45b6ce1b3764f9927e623a71adf8" containerID="a5b213491f434eaf96969b81f553d91a137807d1aa05fbe10cf34450dd9f1422" exitCode=0
Mar 08 03:25:18.277979 master-0 kubenswrapper[7387]: I0308 03:25:18.276133 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerDied","Data":"a5b213491f434eaf96969b81f553d91a137807d1aa05fbe10cf34450dd9f1422"}
Mar 08 03:25:18.281454 master-0 kubenswrapper[7387]: I0308 03:25:18.280549 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-xhkzl_d5f84bd4-2803-41ff-a1d1-a549991fe895/multus-admission-controller/0.log"
Mar 08 03:25:18.281454 master-0 kubenswrapper[7387]: I0308 03:25:18.280677 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl" event={"ID":"d5f84bd4-2803-41ff-a1d1-a549991fe895","Type":"ContainerDied","Data":"b303d9907e09a871fa5a36f0194c592a76421a2844b95a9ceaaef97f1d545abf"}
Mar 08 03:25:18.281454 master-0 kubenswrapper[7387]: I0308 03:25:18.280695 7387 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-xhkzl"
Mar 08 03:25:18.281454 master-0 kubenswrapper[7387]: I0308 03:25:18.280736 7387 scope.go:117] "RemoveContainer" containerID="4c27d8bf0fe82333d5a0263568559ac58eb59de0b0e67b1c1334b664b1330158"
Mar 08 03:25:18.346320 master-0 kubenswrapper[7387]: I0308 03:25:18.346265 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-xhkzl"]
Mar 08 03:25:18.350991 master-0 kubenswrapper[7387]: I0308 03:25:18.350879 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-xhkzl"]
Mar 08 03:25:18.394889 master-0 kubenswrapper[7387]: I0308 03:25:18.394816 7387 scope.go:117] "RemoveContainer" containerID="d8908e02467ded566e9d23379f605a2e44df49bd48cf230c5b0b05ea8c4f6b21"
Mar 08 03:25:18.598269 master-0 kubenswrapper[7387]: I0308 03:25:18.598223 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:25:18.598269 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:25:18.598269 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:25:18.598269 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:25:18.598470 master-0 kubenswrapper[7387]: I0308 03:25:18.598276 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:25:18.607072 master-0 kubenswrapper[7387]: I0308 03:25:18.607035 7387 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 08 03:25:18.685925 master-0 kubenswrapper[7387]: I0308 03:25:18.682444 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_bf2c720c-7700-4cdb-b9e9-9341479046d6/installer/0.log"
Mar 08 03:25:18.685925 master-0 kubenswrapper[7387]: I0308 03:25:18.682504 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 08 03:25:18.728917 master-0 kubenswrapper[7387]: I0308 03:25:18.726221 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf2c720c-7700-4cdb-b9e9-9341479046d6-kubelet-dir\") pod \"bf2c720c-7700-4cdb-b9e9-9341479046d6\" (UID: \"bf2c720c-7700-4cdb-b9e9-9341479046d6\") "
Mar 08 03:25:18.728917 master-0 kubenswrapper[7387]: I0308 03:25:18.726336 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf2c720c-7700-4cdb-b9e9-9341479046d6-var-lock\") pod \"bf2c720c-7700-4cdb-b9e9-9341479046d6\" (UID: \"bf2c720c-7700-4cdb-b9e9-9341479046d6\") "
Mar 08 03:25:18.728917 master-0 kubenswrapper[7387]: I0308 03:25:18.726386 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ddf7d93b-6a73-4de5-b984-cde6fba07b48-kubelet-dir\") pod \"ddf7d93b-6a73-4de5-b984-cde6fba07b48\" (UID: \"ddf7d93b-6a73-4de5-b984-cde6fba07b48\") "
Mar 08 03:25:18.728917 master-0 kubenswrapper[7387]: I0308 03:25:18.726395 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf2c720c-7700-4cdb-b9e9-9341479046d6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bf2c720c-7700-4cdb-b9e9-9341479046d6" (UID: "bf2c720c-7700-4cdb-b9e9-9341479046d6").
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:25:18.728917 master-0 kubenswrapper[7387]: I0308 03:25:18.726440 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf2c720c-7700-4cdb-b9e9-9341479046d6-kube-api-access\") pod \"bf2c720c-7700-4cdb-b9e9-9341479046d6\" (UID: \"bf2c720c-7700-4cdb-b9e9-9341479046d6\") "
Mar 08 03:25:18.728917 master-0 kubenswrapper[7387]: I0308 03:25:18.726456 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddf7d93b-6a73-4de5-b984-cde6fba07b48-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ddf7d93b-6a73-4de5-b984-cde6fba07b48" (UID: "ddf7d93b-6a73-4de5-b984-cde6fba07b48"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:25:18.728917 master-0 kubenswrapper[7387]: I0308 03:25:18.726461 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddf7d93b-6a73-4de5-b984-cde6fba07b48-kube-api-access\") pod \"ddf7d93b-6a73-4de5-b984-cde6fba07b48\" (UID: \"ddf7d93b-6a73-4de5-b984-cde6fba07b48\") "
Mar 08 03:25:18.728917 master-0 kubenswrapper[7387]: I0308 03:25:18.726469 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf2c720c-7700-4cdb-b9e9-9341479046d6-var-lock" (OuterVolumeSpecName: "var-lock") pod "bf2c720c-7700-4cdb-b9e9-9341479046d6" (UID: "bf2c720c-7700-4cdb-b9e9-9341479046d6"). InnerVolumeSpecName "var-lock".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:25:18.728917 master-0 kubenswrapper[7387]: I0308 03:25:18.726523 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ddf7d93b-6a73-4de5-b984-cde6fba07b48-var-lock\") pod \"ddf7d93b-6a73-4de5-b984-cde6fba07b48\" (UID: \"ddf7d93b-6a73-4de5-b984-cde6fba07b48\") "
Mar 08 03:25:18.728917 master-0 kubenswrapper[7387]: I0308 03:25:18.726861 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ddf7d93b-6a73-4de5-b984-cde6fba07b48-var-lock" (OuterVolumeSpecName: "var-lock") pod "ddf7d93b-6a73-4de5-b984-cde6fba07b48" (UID: "ddf7d93b-6a73-4de5-b984-cde6fba07b48"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:25:18.728917 master-0 kubenswrapper[7387]: I0308 03:25:18.727069 7387 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bf2c720c-7700-4cdb-b9e9-9341479046d6-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 08 03:25:18.728917 master-0 kubenswrapper[7387]: I0308 03:25:18.727086 7387 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ddf7d93b-6a73-4de5-b984-cde6fba07b48-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:25:18.728917 master-0 kubenswrapper[7387]: I0308 03:25:18.727104 7387 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ddf7d93b-6a73-4de5-b984-cde6fba07b48-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 08 03:25:18.728917 master-0 kubenswrapper[7387]: I0308 03:25:18.727117 7387 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bf2c720c-7700-4cdb-b9e9-9341479046d6-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:25:18.729704 master-0
kubenswrapper[7387]: I0308 03:25:18.729096 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf2c720c-7700-4cdb-b9e9-9341479046d6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bf2c720c-7700-4cdb-b9e9-9341479046d6" (UID: "bf2c720c-7700-4cdb-b9e9-9341479046d6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:25:18.735935 master-0 kubenswrapper[7387]: I0308 03:25:18.729872 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddf7d93b-6a73-4de5-b984-cde6fba07b48-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ddf7d93b-6a73-4de5-b984-cde6fba07b48" (UID: "ddf7d93b-6a73-4de5-b984-cde6fba07b48"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:25:18.828526 master-0 kubenswrapper[7387]: I0308 03:25:18.828202 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf2c720c-7700-4cdb-b9e9-9341479046d6-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 08 03:25:18.828526 master-0 kubenswrapper[7387]: I0308 03:25:18.828239 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddf7d93b-6a73-4de5-b984-cde6fba07b48-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 08 03:25:19.292941 master-0 kubenswrapper[7387]: I0308 03:25:19.292861 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"aea52bbe-5b64-45c7-8f8c-81d027f133d0","Type":"ContainerStarted","Data":"15100ba27484610dbf9b61547d49ce1603f2d498f9b1453c4fbb68314939da8d"}
Mar 08 03:25:19.295342 master-0 kubenswrapper[7387]: I0308 03:25:19.295287 7387 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_bf2c720c-7700-4cdb-b9e9-9341479046d6/installer/0.log"
Mar 08 03:25:19.295342 master-0 kubenswrapper[7387]: I0308 03:25:19.295344 7387 generic.go:334] "Generic (PLEG): container finished" podID="bf2c720c-7700-4cdb-b9e9-9341479046d6" containerID="4d2476856e29b23f71b3efa17f6e2d132475b3255bd78a4c4c27299dcf99f68f" exitCode=1
Mar 08 03:25:19.295520 master-0 kubenswrapper[7387]: I0308 03:25:19.295405 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"bf2c720c-7700-4cdb-b9e9-9341479046d6","Type":"ContainerDied","Data":"4d2476856e29b23f71b3efa17f6e2d132475b3255bd78a4c4c27299dcf99f68f"}
Mar 08 03:25:19.295520 master-0 kubenswrapper[7387]: I0308 03:25:19.295437 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"bf2c720c-7700-4cdb-b9e9-9341479046d6","Type":"ContainerDied","Data":"768f1952eb5c9c4e206fc6f42ed6d3c451f1ab498187eab3b5dd94dd0db3d647"}
Mar 08 03:25:19.295520 master-0 kubenswrapper[7387]: I0308 03:25:19.295458 7387 scope.go:117] "RemoveContainer" containerID="4d2476856e29b23f71b3efa17f6e2d132475b3255bd78a4c4c27299dcf99f68f"
Mar 08 03:25:19.295699 master-0 kubenswrapper[7387]: I0308 03:25:19.295560 7387 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 08 03:25:19.301415 master-0 kubenswrapper[7387]: I0308 03:25:19.301364 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"ddf7d93b-6a73-4de5-b984-cde6fba07b48","Type":"ContainerDied","Data":"32a87f978dcf5066fede63e02fc606a7202218ed7b98595c93603193fba400bb"}
Mar 08 03:25:19.301521 master-0 kubenswrapper[7387]: I0308 03:25:19.301417 7387 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32a87f978dcf5066fede63e02fc606a7202218ed7b98595c93603193fba400bb"
Mar 08 03:25:19.301638 master-0 kubenswrapper[7387]: I0308 03:25:19.301388 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 08 03:25:19.311105 master-0 kubenswrapper[7387]: I0308 03:25:19.311059 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"f4aba59a0dac77531c80c6a14f39f9750ecdb826fbbcf0547c734560189b6319"}
Mar 08 03:25:19.311237 master-0 kubenswrapper[7387]: I0308 03:25:19.311107 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"93d525f5634313a3b094a60485f81885ea0fad7e7ead5c0208227c604d3c848e"}
Mar 08 03:25:19.311237 master-0 kubenswrapper[7387]: I0308 03:25:19.311125 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530"}
Mar 08 03:25:19.311367 master-0 kubenswrapper[7387]: I0308 03:25:19.311293 7387 kubelet.go:2542] "SyncLoop (probe)"
probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 08 03:25:19.323222 master-0 kubenswrapper[7387]: I0308 03:25:19.323143 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=2.323125042 podStartE2EDuration="2.323125042s" podCreationTimestamp="2026-03-08 03:25:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:25:19.31921897 +0000 UTC m=+855.713694661" watchObservedRunningTime="2026-03-08 03:25:19.323125042 +0000 UTC m=+855.717600733"
Mar 08 03:25:19.327572 master-0 kubenswrapper[7387]: I0308 03:25:19.327534 7387 scope.go:117] "RemoveContainer" containerID="4d2476856e29b23f71b3efa17f6e2d132475b3255bd78a4c4c27299dcf99f68f"
Mar 08 03:25:19.328000 master-0 kubenswrapper[7387]: E0308 03:25:19.327958 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d2476856e29b23f71b3efa17f6e2d132475b3255bd78a4c4c27299dcf99f68f\": container with ID starting with 4d2476856e29b23f71b3efa17f6e2d132475b3255bd78a4c4c27299dcf99f68f not found: ID does not exist" containerID="4d2476856e29b23f71b3efa17f6e2d132475b3255bd78a4c4c27299dcf99f68f"
Mar 08 03:25:19.328069 master-0 kubenswrapper[7387]: I0308 03:25:19.328010 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d2476856e29b23f71b3efa17f6e2d132475b3255bd78a4c4c27299dcf99f68f"} err="failed to get container status \"4d2476856e29b23f71b3efa17f6e2d132475b3255bd78a4c4c27299dcf99f68f\": rpc error: code = NotFound desc = could not find container \"4d2476856e29b23f71b3efa17f6e2d132475b3255bd78a4c4c27299dcf99f68f\": container with ID starting with 4d2476856e29b23f71b3efa17f6e2d132475b3255bd78a4c4c27299dcf99f68f not found: ID does not exist"
Mar 08 03:25:19.352816 master-0
kubenswrapper[7387]: I0308 03:25:19.352679 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=3.3526597430000002 podStartE2EDuration="3.352659743s" podCreationTimestamp="2026-03-08 03:25:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:25:19.348783711 +0000 UTC m=+855.743259402" watchObservedRunningTime="2026-03-08 03:25:19.352659743 +0000 UTC m=+855.747135454"
Mar 08 03:25:19.373639 master-0 kubenswrapper[7387]: I0308 03:25:19.373567 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 08 03:25:19.376866 master-0 kubenswrapper[7387]: I0308 03:25:19.376823 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 08 03:25:19.598511 master-0 kubenswrapper[7387]: I0308 03:25:19.598447 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:25:19.598511 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:25:19.598511 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:25:19.598511 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:25:19.598858 master-0 kubenswrapper[7387]: I0308 03:25:19.598527 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:25:19.768047 master-0 kubenswrapper[7387]: I0308 03:25:19.767990 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir"
podUID="bf2c720c-7700-4cdb-b9e9-9341479046d6" path="/var/lib/kubelet/pods/bf2c720c-7700-4cdb-b9e9-9341479046d6/volumes"
Mar 08 03:25:19.768641 master-0 kubenswrapper[7387]: I0308 03:25:19.768602 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5f84bd4-2803-41ff-a1d1-a549991fe895" path="/var/lib/kubelet/pods/d5f84bd4-2803-41ff-a1d1-a549991fe895/volumes"
Mar 08 03:25:20.600072 master-0 kubenswrapper[7387]: I0308 03:25:20.599970 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:25:20.600072 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:25:20.600072 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:25:20.600072 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:25:20.600072 master-0 kubenswrapper[7387]: I0308 03:25:20.600064 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:25:21.599746 master-0 kubenswrapper[7387]: I0308 03:25:21.599651 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:25:21.599746 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:25:21.599746 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:25:21.599746 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:25:21.600685 master-0 kubenswrapper[7387]: I0308 03:25:21.599740 7387 prober.go:107] "Probe failed" probeType="Startup"
pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:25:22.599828 master-0 kubenswrapper[7387]: I0308 03:25:22.599750 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:25:22.599828 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:25:22.599828 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:25:22.599828 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:25:22.600760 master-0 kubenswrapper[7387]: I0308 03:25:22.599856 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:25:23.599106 master-0 kubenswrapper[7387]: I0308 03:25:23.599050 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:25:23.599106 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:25:23.599106 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:25:23.599106 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:25:23.599414 master-0 kubenswrapper[7387]: I0308 03:25:23.599127 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:25:24.598813
master-0 kubenswrapper[7387]: I0308 03:25:24.598701 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:25:24.598813 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:25:24.598813 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:25:24.598813 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:25:24.598813 master-0 kubenswrapper[7387]: I0308 03:25:24.598806 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:25:25.600170 master-0 kubenswrapper[7387]: I0308 03:25:25.600061 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:25:25.600170 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:25:25.600170 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:25:25.600170 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:25:25.600170 master-0 kubenswrapper[7387]: I0308 03:25:25.600158 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:25:26.598835 master-0 kubenswrapper[7387]: I0308 03:25:26.598715 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:25:26.598835 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:25:26.598835 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:25:26.598835 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:25:26.599326 master-0 kubenswrapper[7387]: I0308 03:25:26.598851 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:25:27.599654 master-0 kubenswrapper[7387]: I0308 03:25:27.599536 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:25:27.599654 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:25:27.599654 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:25:27.599654 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:25:27.600716 master-0 kubenswrapper[7387]: I0308 03:25:27.599668 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:25:28.601410 master-0 kubenswrapper[7387]: I0308 03:25:28.601293 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:25:28.601410 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:25:28.601410 master-0
kubenswrapper[7387]: [+]process-running ok
Mar 08 03:25:28.601410 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:25:28.602368 master-0 kubenswrapper[7387]: I0308 03:25:28.601420 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:25:29.598811 master-0 kubenswrapper[7387]: I0308 03:25:29.598717 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:25:29.598811 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:25:29.598811 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:25:29.598811 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:25:29.599276 master-0 kubenswrapper[7387]: I0308 03:25:29.598831 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:25:30.599845 master-0 kubenswrapper[7387]: I0308 03:25:30.599753 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:25:30.599845 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:25:30.599845 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:25:30.599845 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:25:30.601014 master-0 kubenswrapper[7387]: I0308 03:25:30.599869 7387 prober.go:107] "Probe
failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:25:31.599575 master-0 kubenswrapper[7387]: I0308 03:25:31.599493 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:25:31.599575 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:25:31.599575 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:25:31.599575 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:25:31.600560 master-0 kubenswrapper[7387]: I0308 03:25:31.599576 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:25:32.599522 master-0 kubenswrapper[7387]: I0308 03:25:32.599381 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:25:32.599522 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:25:32.599522 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:25:32.599522 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:25:32.599522 master-0 kubenswrapper[7387]: I0308 03:25:32.599504 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode:
500" Mar 08 03:25:33.444459 master-0 kubenswrapper[7387]: I0308 03:25:33.444286 7387 generic.go:334] "Generic (PLEG): container finished" podID="2728b91e-d59a-4e85-b245-0f297e9377f9" containerID="b4185e1d0f2f95c6a9df7b27b993524a8893ce06520676f0b8d760044b63fa25" exitCode=0 Mar 08 03:25:33.444459 master-0 kubenswrapper[7387]: I0308 03:25:33.444333 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" event={"ID":"2728b91e-d59a-4e85-b245-0f297e9377f9","Type":"ContainerDied","Data":"b4185e1d0f2f95c6a9df7b27b993524a8893ce06520676f0b8d760044b63fa25"} Mar 08 03:25:33.444878 master-0 kubenswrapper[7387]: I0308 03:25:33.444787 7387 scope.go:117] "RemoveContainer" containerID="b4185e1d0f2f95c6a9df7b27b993524a8893ce06520676f0b8d760044b63fa25" Mar 08 03:25:33.598961 master-0 kubenswrapper[7387]: I0308 03:25:33.598875 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:33.598961 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:33.598961 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:33.598961 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:33.599178 master-0 kubenswrapper[7387]: I0308 03:25:33.598957 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:34.452031 master-0 kubenswrapper[7387]: I0308 03:25:34.451962 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" 
event={"ID":"2728b91e-d59a-4e85-b245-0f297e9377f9","Type":"ContainerStarted","Data":"04a6db532c723d834f569ef7e497439c69ec4da2c40238927f0c4d8610072950"} Mar 08 03:25:34.598847 master-0 kubenswrapper[7387]: I0308 03:25:34.598757 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:34.598847 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:34.598847 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:34.598847 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:34.599181 master-0 kubenswrapper[7387]: I0308 03:25:34.598900 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:34.961739 master-0 kubenswrapper[7387]: I0308 03:25:34.961680 7387 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 08 03:25:34.961970 master-0 kubenswrapper[7387]: E0308 03:25:34.961804 7387 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/etcd-pod.yaml\": /etc/kubernetes/manifests/etcd-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Mar 08 03:25:34.962125 master-0 kubenswrapper[7387]: I0308 03:25:34.962085 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" containerID="cri-o://2d0e4151dfd779023c2cdffa0f63f74dabd1568f864ae7ed089a95f39e140429" gracePeriod=30 Mar 08 03:25:34.962187 master-0 kubenswrapper[7387]: I0308 03:25:34.962129 7387 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" containerID="cri-o://e12c78daead84383caebe7336896e67a8f0e6a3ed9ea399e900316d1f1ebd509" gracePeriod=30 Mar 08 03:25:34.962187 master-0 kubenswrapper[7387]: I0308 03:25:34.962126 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" containerID="cri-o://d43c128670bddbe13157be5c410fd92ee875166fc289159e03df7915d0b4a4b5" gracePeriod=30 Mar 08 03:25:34.963040 master-0 kubenswrapper[7387]: I0308 03:25:34.962177 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" containerID="cri-o://b0ddb055abb1b0b4b1b7109c72f5a73dd28b828d536a0eddb00d00cac4e3d31a" gracePeriod=30 Mar 08 03:25:34.963040 master-0 kubenswrapper[7387]: I0308 03:25:34.962171 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" containerID="cri-o://5f2d918ed65c54c86661b7dcec562a46483a207694d4f8bd8c866e26621dca11" gracePeriod=30 Mar 08 03:25:34.972047 master-0 kubenswrapper[7387]: I0308 03:25:34.971981 7387 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 08 03:25:34.972260 master-0 kubenswrapper[7387]: E0308 03:25:34.972230 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddf7d93b-6a73-4de5-b984-cde6fba07b48" containerName="installer" Mar 08 03:25:34.972260 master-0 kubenswrapper[7387]: I0308 03:25:34.972243 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddf7d93b-6a73-4de5-b984-cde6fba07b48" containerName="installer" Mar 08 03:25:34.972260 master-0 kubenswrapper[7387]: E0308 03:25:34.972250 7387 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="bf2c720c-7700-4cdb-b9e9-9341479046d6" containerName="installer" Mar 08 03:25:34.972260 master-0 kubenswrapper[7387]: I0308 03:25:34.972256 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf2c720c-7700-4cdb-b9e9-9341479046d6" containerName="installer" Mar 08 03:25:34.972260 master-0 kubenswrapper[7387]: E0308 03:25:34.972266 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 08 03:25:34.972454 master-0 kubenswrapper[7387]: I0308 03:25:34.972273 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 08 03:25:34.972454 master-0 kubenswrapper[7387]: E0308 03:25:34.972283 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f84bd4-2803-41ff-a1d1-a549991fe895" containerName="kube-rbac-proxy" Mar 08 03:25:34.972454 master-0 kubenswrapper[7387]: I0308 03:25:34.972289 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f84bd4-2803-41ff-a1d1-a549991fe895" containerName="kube-rbac-proxy" Mar 08 03:25:34.972454 master-0 kubenswrapper[7387]: E0308 03:25:34.972300 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 08 03:25:34.972454 master-0 kubenswrapper[7387]: I0308 03:25:34.972306 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 08 03:25:34.972454 master-0 kubenswrapper[7387]: E0308 03:25:34.972315 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 08 03:25:34.972454 master-0 kubenswrapper[7387]: I0308 03:25:34.972321 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 08 03:25:34.972454 master-0 kubenswrapper[7387]: E0308 03:25:34.972328 7387 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f84bd4-2803-41ff-a1d1-a549991fe895" containerName="multus-admission-controller" Mar 08 03:25:34.972454 master-0 kubenswrapper[7387]: I0308 03:25:34.972334 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f84bd4-2803-41ff-a1d1-a549991fe895" containerName="multus-admission-controller" Mar 08 03:25:34.972454 master-0 kubenswrapper[7387]: E0308 03:25:34.972341 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 08 03:25:34.972454 master-0 kubenswrapper[7387]: I0308 03:25:34.972346 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 08 03:25:34.972454 master-0 kubenswrapper[7387]: E0308 03:25:34.972356 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-ensure-env-vars" Mar 08 03:25:34.972454 master-0 kubenswrapper[7387]: I0308 03:25:34.972362 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-ensure-env-vars" Mar 08 03:25:34.972454 master-0 kubenswrapper[7387]: E0308 03:25:34.972373 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-resources-copy" Mar 08 03:25:34.972454 master-0 kubenswrapper[7387]: I0308 03:25:34.972379 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-resources-copy" Mar 08 03:25:34.972454 master-0 kubenswrapper[7387]: E0308 03:25:34.972391 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="setup" Mar 08 03:25:34.972454 master-0 kubenswrapper[7387]: I0308 03:25:34.972397 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" 
containerName="setup" Mar 08 03:25:34.972454 master-0 kubenswrapper[7387]: E0308 03:25:34.972404 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 08 03:25:34.972454 master-0 kubenswrapper[7387]: I0308 03:25:34.972411 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 08 03:25:34.973142 master-0 kubenswrapper[7387]: I0308 03:25:34.972511 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 08 03:25:34.973142 master-0 kubenswrapper[7387]: I0308 03:25:34.972528 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f84bd4-2803-41ff-a1d1-a549991fe895" containerName="kube-rbac-proxy" Mar 08 03:25:34.973142 master-0 kubenswrapper[7387]: I0308 03:25:34.972538 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f84bd4-2803-41ff-a1d1-a549991fe895" containerName="multus-admission-controller" Mar 08 03:25:34.973142 master-0 kubenswrapper[7387]: I0308 03:25:34.972546 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 08 03:25:34.973142 master-0 kubenswrapper[7387]: I0308 03:25:34.972554 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf2c720c-7700-4cdb-b9e9-9341479046d6" containerName="installer" Mar 08 03:25:34.973142 master-0 kubenswrapper[7387]: I0308 03:25:34.972565 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 08 03:25:34.973142 master-0 kubenswrapper[7387]: I0308 03:25:34.972574 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 08 03:25:34.973142 master-0 kubenswrapper[7387]: I0308 03:25:34.972583 7387 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 08 03:25:34.973142 master-0 kubenswrapper[7387]: I0308 03:25:34.972593 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddf7d93b-6a73-4de5-b984-cde6fba07b48" containerName="installer" Mar 08 03:25:35.026933 master-0 kubenswrapper[7387]: I0308 03:25:35.026869 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:25:35.027086 master-0 kubenswrapper[7387]: I0308 03:25:35.027042 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:25:35.027128 master-0 kubenswrapper[7387]: I0308 03:25:35.027109 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:25:35.027242 master-0 kubenswrapper[7387]: I0308 03:25:35.027224 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:25:35.027275 master-0 kubenswrapper[7387]: I0308 03:25:35.027255 7387 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:25:35.027364 master-0 kubenswrapper[7387]: I0308 03:25:35.027350 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:25:35.128368 master-0 kubenswrapper[7387]: I0308 03:25:35.128313 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:25:35.128482 master-0 kubenswrapper[7387]: I0308 03:25:35.128379 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:25:35.128482 master-0 kubenswrapper[7387]: I0308 03:25:35.128398 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:25:35.128482 master-0 kubenswrapper[7387]: I0308 03:25:35.128435 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod 
\"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:25:35.128482 master-0 kubenswrapper[7387]: I0308 03:25:35.128461 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:25:35.128632 master-0 kubenswrapper[7387]: I0308 03:25:35.128584 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:25:35.128675 master-0 kubenswrapper[7387]: I0308 03:25:35.128645 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:25:35.128675 master-0 kubenswrapper[7387]: I0308 03:25:35.128612 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:25:35.128838 master-0 kubenswrapper[7387]: I0308 03:25:35.128676 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:25:35.128838 master-0 kubenswrapper[7387]: I0308 03:25:35.128716 7387 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:25:35.129054 master-0 kubenswrapper[7387]: I0308 03:25:35.128999 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:25:35.129109 master-0 kubenswrapper[7387]: I0308 03:25:35.129051 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:25:35.462946 master-0 kubenswrapper[7387]: I0308 03:25:35.462830 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 08 03:25:35.464218 master-0 kubenswrapper[7387]: I0308 03:25:35.464159 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 08 03:25:35.467009 master-0 kubenswrapper[7387]: I0308 03:25:35.466893 7387 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="5f2d918ed65c54c86661b7dcec562a46483a207694d4f8bd8c866e26621dca11" exitCode=2 Mar 08 03:25:35.467009 master-0 kubenswrapper[7387]: I0308 03:25:35.466985 7387 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="d43c128670bddbe13157be5c410fd92ee875166fc289159e03df7915d0b4a4b5" exitCode=0 Mar 08 03:25:35.467009 master-0 kubenswrapper[7387]: I0308 03:25:35.467001 7387 
generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="e12c78daead84383caebe7336896e67a8f0e6a3ed9ea399e900316d1f1ebd509" exitCode=2 Mar 08 03:25:35.469270 master-0 kubenswrapper[7387]: I0308 03:25:35.469187 7387 generic.go:334] "Generic (PLEG): container finished" podID="3c20b192-755d-46cd-ab12-2e823b92222e" containerID="0f14e36a52435c9a7870808befbb0f157c9e7126b2ba8d72d22dd7d795a56f5e" exitCode=0 Mar 08 03:25:35.469270 master-0 kubenswrapper[7387]: I0308 03:25:35.469248 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"3c20b192-755d-46cd-ab12-2e823b92222e","Type":"ContainerDied","Data":"0f14e36a52435c9a7870808befbb0f157c9e7126b2ba8d72d22dd7d795a56f5e"} Mar 08 03:25:35.600241 master-0 kubenswrapper[7387]: I0308 03:25:35.600132 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:35.600241 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:35.600241 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:35.600241 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:35.601016 master-0 kubenswrapper[7387]: I0308 03:25:35.600249 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:36.602311 master-0 kubenswrapper[7387]: I0308 03:25:36.602201 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 
03:25:36.602311 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:36.602311 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:36.602311 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:36.603358 master-0 kubenswrapper[7387]: I0308 03:25:36.602342 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:36.893096 master-0 kubenswrapper[7387]: I0308 03:25:36.893017 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 08 03:25:36.958181 master-0 kubenswrapper[7387]: I0308 03:25:36.958101 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c20b192-755d-46cd-ab12-2e823b92222e-kubelet-dir\") pod \"3c20b192-755d-46cd-ab12-2e823b92222e\" (UID: \"3c20b192-755d-46cd-ab12-2e823b92222e\") " Mar 08 03:25:36.958377 master-0 kubenswrapper[7387]: I0308 03:25:36.958224 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c20b192-755d-46cd-ab12-2e823b92222e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3c20b192-755d-46cd-ab12-2e823b92222e" (UID: "3c20b192-755d-46cd-ab12-2e823b92222e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:25:36.958377 master-0 kubenswrapper[7387]: I0308 03:25:36.958312 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c20b192-755d-46cd-ab12-2e823b92222e-kube-api-access\") pod \"3c20b192-755d-46cd-ab12-2e823b92222e\" (UID: \"3c20b192-755d-46cd-ab12-2e823b92222e\") " Mar 08 03:25:36.958507 master-0 kubenswrapper[7387]: I0308 03:25:36.958383 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3c20b192-755d-46cd-ab12-2e823b92222e-var-lock\") pod \"3c20b192-755d-46cd-ab12-2e823b92222e\" (UID: \"3c20b192-755d-46cd-ab12-2e823b92222e\") " Mar 08 03:25:36.958724 master-0 kubenswrapper[7387]: I0308 03:25:36.958637 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c20b192-755d-46cd-ab12-2e823b92222e-var-lock" (OuterVolumeSpecName: "var-lock") pod "3c20b192-755d-46cd-ab12-2e823b92222e" (UID: "3c20b192-755d-46cd-ab12-2e823b92222e"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:25:36.959014 master-0 kubenswrapper[7387]: I0308 03:25:36.958968 7387 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3c20b192-755d-46cd-ab12-2e823b92222e-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 03:25:36.959014 master-0 kubenswrapper[7387]: I0308 03:25:36.959009 7387 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c20b192-755d-46cd-ab12-2e823b92222e-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 03:25:36.962410 master-0 kubenswrapper[7387]: I0308 03:25:36.962374 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c20b192-755d-46cd-ab12-2e823b92222e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3c20b192-755d-46cd-ab12-2e823b92222e" (UID: "3c20b192-755d-46cd-ab12-2e823b92222e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:25:37.061101 master-0 kubenswrapper[7387]: I0308 03:25:37.060951 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c20b192-755d-46cd-ab12-2e823b92222e-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 03:25:37.490424 master-0 kubenswrapper[7387]: I0308 03:25:37.490266 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"3c20b192-755d-46cd-ab12-2e823b92222e","Type":"ContainerDied","Data":"a708aa69cc052f931f58c87cb7019d54064fd8232a5208d8d5f9a13a69e77e36"} Mar 08 03:25:37.490424 master-0 kubenswrapper[7387]: I0308 03:25:37.490351 7387 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a708aa69cc052f931f58c87cb7019d54064fd8232a5208d8d5f9a13a69e77e36" Mar 08 03:25:37.490424 master-0 kubenswrapper[7387]: I0308 03:25:37.490305 7387 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 08 03:25:37.600089 master-0 kubenswrapper[7387]: I0308 03:25:37.600003 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:37.600089 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:37.600089 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:37.600089 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:37.600699 master-0 kubenswrapper[7387]: I0308 03:25:37.600105 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:38.600138 master-0 kubenswrapper[7387]: I0308 03:25:38.599887 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:38.600138 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:38.600138 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:38.600138 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:38.601143 master-0 kubenswrapper[7387]: I0308 03:25:38.600166 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:39.600503 master-0 kubenswrapper[7387]: I0308 03:25:39.600413 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:39.600503 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:39.600503 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:39.600503 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:39.601492 master-0 kubenswrapper[7387]: I0308 03:25:39.600500 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:40.599086 master-0 kubenswrapper[7387]: I0308 03:25:40.599020 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:40.599086 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:40.599086 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:40.599086 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:40.599468 master-0 kubenswrapper[7387]: I0308 03:25:40.599088 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:41.602548 master-0 kubenswrapper[7387]: I0308 03:25:41.602470 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:41.602548 master-0 
kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:41.602548 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:41.602548 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:41.603088 master-0 kubenswrapper[7387]: I0308 03:25:41.602576 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:42.600332 master-0 kubenswrapper[7387]: I0308 03:25:42.600244 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:42.600332 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:42.600332 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:42.600332 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:42.600811 master-0 kubenswrapper[7387]: I0308 03:25:42.600348 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:43.599775 master-0 kubenswrapper[7387]: I0308 03:25:43.599694 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:43.599775 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:43.599775 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:43.599775 master-0 kubenswrapper[7387]: healthz check failed Mar 08 
03:25:43.601083 master-0 kubenswrapper[7387]: I0308 03:25:43.599793 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:44.599734 master-0 kubenswrapper[7387]: I0308 03:25:44.599649 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:44.599734 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:44.599734 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:44.599734 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:44.600994 master-0 kubenswrapper[7387]: I0308 03:25:44.600893 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:45.599839 master-0 kubenswrapper[7387]: I0308 03:25:45.599757 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:45.599839 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:45.599839 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:45.599839 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:45.600841 master-0 kubenswrapper[7387]: I0308 03:25:45.599942 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" 
podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:46.599618 master-0 kubenswrapper[7387]: I0308 03:25:46.599533 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:46.599618 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:46.599618 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:46.599618 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:46.600125 master-0 kubenswrapper[7387]: I0308 03:25:46.599642 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:47.599018 master-0 kubenswrapper[7387]: I0308 03:25:47.598936 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:47.599018 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:47.599018 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:47.599018 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:47.599959 master-0 kubenswrapper[7387]: I0308 03:25:47.599022 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:48.603221 master-0 kubenswrapper[7387]: I0308 03:25:48.603136 7387 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:48.603221 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:48.603221 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:48.603221 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:48.604410 master-0 kubenswrapper[7387]: I0308 03:25:48.603257 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:49.603506 master-0 kubenswrapper[7387]: I0308 03:25:49.603421 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:49.603506 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:49.603506 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:49.603506 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:49.603506 master-0 kubenswrapper[7387]: I0308 03:25:49.603490 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:49.614679 master-0 kubenswrapper[7387]: I0308 03:25:49.614607 7387 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="21c52df530390b93390189f772de90cc9461c78b27a38ea7bd0553d5255d9c65" exitCode=1 Mar 08 03:25:49.614940 master-0 
kubenswrapper[7387]: I0308 03:25:49.614680 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"21c52df530390b93390189f772de90cc9461c78b27a38ea7bd0553d5255d9c65"} Mar 08 03:25:49.614940 master-0 kubenswrapper[7387]: I0308 03:25:49.614804 7387 scope.go:117] "RemoveContainer" containerID="e305d74af325e5eeb0f6ddb53f983c1d6252a98bbdc0c950b558e6fbfd49c54c" Mar 08 03:25:49.615570 master-0 kubenswrapper[7387]: I0308 03:25:49.615505 7387 scope.go:117] "RemoveContainer" containerID="21c52df530390b93390189f772de90cc9461c78b27a38ea7bd0553d5255d9c65" Mar 08 03:25:49.615973 master-0 kubenswrapper[7387]: E0308 03:25:49.615895 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 08 03:25:50.599342 master-0 kubenswrapper[7387]: I0308 03:25:50.599269 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:50.599342 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:50.599342 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:50.599342 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:50.599667 master-0 kubenswrapper[7387]: I0308 03:25:50.599351 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:51.570803 master-0 kubenswrapper[7387]: I0308 03:25:51.570740 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:25:51.572989 master-0 kubenswrapper[7387]: I0308 03:25:51.572950 7387 scope.go:117] "RemoveContainer" containerID="21c52df530390b93390189f772de90cc9461c78b27a38ea7bd0553d5255d9c65" Mar 08 03:25:51.573555 master-0 kubenswrapper[7387]: E0308 03:25:51.573517 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 08 03:25:51.599796 master-0 kubenswrapper[7387]: I0308 03:25:51.599701 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:51.599796 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:51.599796 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:51.599796 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:51.600297 master-0 kubenswrapper[7387]: I0308 03:25:51.599797 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:52.437938 master-0 kubenswrapper[7387]: E0308 03:25:52.437805 7387 kubelet_node_status.go:585] "Error updating node status, will 
retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:25:42Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:25:42Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:25:42Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:25:42Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:25:52.600166 master-0 kubenswrapper[7387]: I0308 03:25:52.600084 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:52.600166 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:52.600166 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:52.600166 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:52.600833 master-0 kubenswrapper[7387]: I0308 03:25:52.600193 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:53.319719 master-0 kubenswrapper[7387]: I0308 03:25:53.319630 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:25:53.320492 master-0 kubenswrapper[7387]: I0308 03:25:53.320449 7387 scope.go:117] "RemoveContainer" containerID="21c52df530390b93390189f772de90cc9461c78b27a38ea7bd0553d5255d9c65" Mar 08 03:25:53.320853 master-0 kubenswrapper[7387]: E0308 03:25:53.320800 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 08 03:25:53.600334 master-0 kubenswrapper[7387]: I0308 03:25:53.600201 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:53.600334 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:53.600334 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:53.600334 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:53.600334 master-0 kubenswrapper[7387]: I0308 03:25:53.600293 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:53.695199 master-0 kubenswrapper[7387]: E0308 03:25:53.695105 7387 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:25:53.739112 master-0 kubenswrapper[7387]: I0308 03:25:53.739052 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:25:53.740378 master-0 kubenswrapper[7387]: I0308 03:25:53.740146 7387 scope.go:117] "RemoveContainer" containerID="21c52df530390b93390189f772de90cc9461c78b27a38ea7bd0553d5255d9c65" Mar 08 03:25:53.742180 master-0 kubenswrapper[7387]: E0308 03:25:53.741977 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 08 03:25:54.600163 master-0 kubenswrapper[7387]: I0308 03:25:54.600081 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:54.600163 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:54.600163 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:54.600163 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:54.600163 master-0 kubenswrapper[7387]: I0308 03:25:54.600154 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 
03:25:55.599946 master-0 kubenswrapper[7387]: I0308 03:25:55.599837 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:55.599946 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:55.599946 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:55.599946 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:55.600542 master-0 kubenswrapper[7387]: I0308 03:25:55.599964 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:56.600274 master-0 kubenswrapper[7387]: I0308 03:25:56.600168 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:56.600274 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:56.600274 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:56.600274 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:56.601399 master-0 kubenswrapper[7387]: I0308 03:25:56.600275 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:57.599288 master-0 kubenswrapper[7387]: I0308 03:25:57.599194 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:57.599288 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:57.599288 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:57.599288 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:57.599787 master-0 kubenswrapper[7387]: I0308 03:25:57.599307 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:57.697742 master-0 kubenswrapper[7387]: I0308 03:25:57.697650 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4bpl8_197afe92-5912-4e90-a477-e3abe001bbc7/ingress-operator/3.log" Mar 08 03:25:57.698669 master-0 kubenswrapper[7387]: I0308 03:25:57.698263 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4bpl8_197afe92-5912-4e90-a477-e3abe001bbc7/ingress-operator/2.log" Mar 08 03:25:57.698797 master-0 kubenswrapper[7387]: I0308 03:25:57.698744 7387 generic.go:334] "Generic (PLEG): container finished" podID="197afe92-5912-4e90-a477-e3abe001bbc7" containerID="3a03f9a9aafa4fbc2ea827886673fad2a6a9650b76a61f6d3b1c9550a51441f3" exitCode=1 Mar 08 03:25:57.698797 master-0 kubenswrapper[7387]: I0308 03:25:57.698784 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" event={"ID":"197afe92-5912-4e90-a477-e3abe001bbc7","Type":"ContainerDied","Data":"3a03f9a9aafa4fbc2ea827886673fad2a6a9650b76a61f6d3b1c9550a51441f3"} Mar 08 03:25:57.699039 master-0 kubenswrapper[7387]: I0308 03:25:57.698820 7387 scope.go:117] "RemoveContainer" containerID="1d5309bb49bc359c6f650d35b0215dfd107ee09ec728eed9abd6a570ec1d8886" Mar 
08 03:25:57.699639 master-0 kubenswrapper[7387]: I0308 03:25:57.699531 7387 scope.go:117] "RemoveContainer" containerID="3a03f9a9aafa4fbc2ea827886673fad2a6a9650b76a61f6d3b1c9550a51441f3" Mar 08 03:25:57.700165 master-0 kubenswrapper[7387]: E0308 03:25:57.700089 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-4bpl8_openshift-ingress-operator(197afe92-5912-4e90-a477-e3abe001bbc7)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" podUID="197afe92-5912-4e90-a477-e3abe001bbc7" Mar 08 03:25:58.600620 master-0 kubenswrapper[7387]: I0308 03:25:58.600378 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:58.600620 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:58.600620 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:58.600620 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:58.600969 master-0 kubenswrapper[7387]: I0308 03:25:58.600630 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:25:58.710209 master-0 kubenswrapper[7387]: I0308 03:25:58.710113 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4bpl8_197afe92-5912-4e90-a477-e3abe001bbc7/ingress-operator/3.log" Mar 08 03:25:59.599768 master-0 kubenswrapper[7387]: I0308 03:25:59.599653 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:25:59.599768 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:25:59.599768 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:25:59.599768 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:25:59.600290 master-0 kubenswrapper[7387]: I0308 03:25:59.599764 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:26:00.599743 master-0 kubenswrapper[7387]: I0308 03:26:00.599659 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:26:00.599743 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:26:00.599743 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:26:00.599743 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:26:00.601319 master-0 kubenswrapper[7387]: I0308 03:26:00.599745 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:26:01.599484 master-0 kubenswrapper[7387]: I0308 03:26:01.599373 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:26:01.599484 master-0 kubenswrapper[7387]: 
[-]has-synced failed: reason withheld Mar 08 03:26:01.599484 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:26:01.599484 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:26:01.599484 master-0 kubenswrapper[7387]: I0308 03:26:01.599469 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:26:01.739180 master-0 kubenswrapper[7387]: I0308 03:26:01.738982 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_6a7152f2-d51f-4e15-8e0a-92278cbecd53/installer/0.log" Mar 08 03:26:01.739180 master-0 kubenswrapper[7387]: I0308 03:26:01.739063 7387 generic.go:334] "Generic (PLEG): container finished" podID="6a7152f2-d51f-4e15-8e0a-92278cbecd53" containerID="6337e7946252e7bfd9c2e54f9544cec48f69509210920bb45fdd12f2048594e7" exitCode=1 Mar 08 03:26:01.739180 master-0 kubenswrapper[7387]: I0308 03:26:01.739107 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"6a7152f2-d51f-4e15-8e0a-92278cbecd53","Type":"ContainerDied","Data":"6337e7946252e7bfd9c2e54f9544cec48f69509210920bb45fdd12f2048594e7"} Mar 08 03:26:02.438659 master-0 kubenswrapper[7387]: E0308 03:26:02.438549 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:26:02.599530 master-0 kubenswrapper[7387]: I0308 03:26:02.599440 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 08 03:26:02.599530 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:26:02.599530 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:26:02.599530 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:26:02.599530 master-0 kubenswrapper[7387]: I0308 03:26:02.599527 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:26:03.164182 master-0 kubenswrapper[7387]: I0308 03:26:03.164086 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_6a7152f2-d51f-4e15-8e0a-92278cbecd53/installer/0.log" Mar 08 03:26:03.164321 master-0 kubenswrapper[7387]: I0308 03:26:03.164238 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 08 03:26:03.296259 master-0 kubenswrapper[7387]: I0308 03:26:03.296160 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6a7152f2-d51f-4e15-8e0a-92278cbecd53-kubelet-dir\") pod \"6a7152f2-d51f-4e15-8e0a-92278cbecd53\" (UID: \"6a7152f2-d51f-4e15-8e0a-92278cbecd53\") " Mar 08 03:26:03.296259 master-0 kubenswrapper[7387]: I0308 03:26:03.296258 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6a7152f2-d51f-4e15-8e0a-92278cbecd53-var-lock\") pod \"6a7152f2-d51f-4e15-8e0a-92278cbecd53\" (UID: \"6a7152f2-d51f-4e15-8e0a-92278cbecd53\") " Mar 08 03:26:03.296740 master-0 kubenswrapper[7387]: I0308 03:26:03.296360 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/6a7152f2-d51f-4e15-8e0a-92278cbecd53-kube-api-access\") pod \"6a7152f2-d51f-4e15-8e0a-92278cbecd53\" (UID: \"6a7152f2-d51f-4e15-8e0a-92278cbecd53\") " Mar 08 03:26:03.296740 master-0 kubenswrapper[7387]: I0308 03:26:03.296479 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a7152f2-d51f-4e15-8e0a-92278cbecd53-var-lock" (OuterVolumeSpecName: "var-lock") pod "6a7152f2-d51f-4e15-8e0a-92278cbecd53" (UID: "6a7152f2-d51f-4e15-8e0a-92278cbecd53"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:26:03.297147 master-0 kubenswrapper[7387]: I0308 03:26:03.296440 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a7152f2-d51f-4e15-8e0a-92278cbecd53-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6a7152f2-d51f-4e15-8e0a-92278cbecd53" (UID: "6a7152f2-d51f-4e15-8e0a-92278cbecd53"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:26:03.297258 master-0 kubenswrapper[7387]: I0308 03:26:03.297073 7387 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6a7152f2-d51f-4e15-8e0a-92278cbecd53-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 03:26:03.302358 master-0 kubenswrapper[7387]: I0308 03:26:03.302255 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a7152f2-d51f-4e15-8e0a-92278cbecd53-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6a7152f2-d51f-4e15-8e0a-92278cbecd53" (UID: "6a7152f2-d51f-4e15-8e0a-92278cbecd53"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:26:03.398871 master-0 kubenswrapper[7387]: I0308 03:26:03.398633 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6a7152f2-d51f-4e15-8e0a-92278cbecd53-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 08 03:26:03.398871 master-0 kubenswrapper[7387]: I0308 03:26:03.398700 7387 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6a7152f2-d51f-4e15-8e0a-92278cbecd53-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:26:03.599501 master-0 kubenswrapper[7387]: I0308 03:26:03.599402 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:26:03.599501 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:26:03.599501 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:26:03.599501 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:26:03.599862 master-0 kubenswrapper[7387]: I0308 03:26:03.599529 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:26:03.695683 master-0 kubenswrapper[7387]: E0308 03:26:03.695501 7387 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:26:03.760942 master-0 kubenswrapper[7387]: I0308 03:26:03.760840 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_aea52bbe-5b64-45c7-8f8c-81d027f133d0/installer/0.log"
Mar 08 03:26:03.761234 master-0 kubenswrapper[7387]: I0308 03:26:03.760952 7387 generic.go:334] "Generic (PLEG): container finished" podID="aea52bbe-5b64-45c7-8f8c-81d027f133d0" containerID="15100ba27484610dbf9b61547d49ce1603f2d498f9b1453c4fbb68314939da8d" exitCode=1
Mar 08 03:26:03.764215 master-0 kubenswrapper[7387]: I0308 03:26:03.764150 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_6a7152f2-d51f-4e15-8e0a-92278cbecd53/installer/0.log"
Mar 08 03:26:03.764362 master-0 kubenswrapper[7387]: I0308 03:26:03.764304 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 08 03:26:03.772519 master-0 kubenswrapper[7387]: I0308 03:26:03.772445 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"aea52bbe-5b64-45c7-8f8c-81d027f133d0","Type":"ContainerDied","Data":"15100ba27484610dbf9b61547d49ce1603f2d498f9b1453c4fbb68314939da8d"}
Mar 08 03:26:03.772519 master-0 kubenswrapper[7387]: I0308 03:26:03.772514 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"6a7152f2-d51f-4e15-8e0a-92278cbecd53","Type":"ContainerDied","Data":"7a3f99a1a7c1a58ad3307e4987c29356dde8b338b069ed85a0484f6cbe18d2c5"}
Mar 08 03:26:03.772814 master-0 kubenswrapper[7387]: I0308 03:26:03.772545 7387 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a3f99a1a7c1a58ad3307e4987c29356dde8b338b069ed85a0484f6cbe18d2c5"
Mar 08 03:26:04.599747 master-0 kubenswrapper[7387]: I0308 03:26:04.599673 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:26:04.599747 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:26:04.599747 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:26:04.599747 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:26:04.601010 master-0 kubenswrapper[7387]: I0308 03:26:04.599771 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:26:04.760489 master-0 kubenswrapper[7387]: I0308 03:26:04.760421 7387 scope.go:117] "RemoveContainer" containerID="21c52df530390b93390189f772de90cc9461c78b27a38ea7bd0553d5255d9c65"
Mar 08 03:26:04.760898 master-0 kubenswrapper[7387]: E0308 03:26:04.760841 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 08 03:26:05.129699 master-0 kubenswrapper[7387]: E0308 03:26:05.129621 7387 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e52bef89f4b50e4590a1719bcc5d7e5.slice/crio-conmon-b0ddb055abb1b0b4b1b7109c72f5a73dd28b828d536a0eddb00d00cac4e3d31a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e52bef89f4b50e4590a1719bcc5d7e5.slice/crio-b0ddb055abb1b0b4b1b7109c72f5a73dd28b828d536a0eddb00d00cac4e3d31a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e52bef89f4b50e4590a1719bcc5d7e5.slice/crio-2d0e4151dfd779023c2cdffa0f63f74dabd1568f864ae7ed089a95f39e140429.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e52bef89f4b50e4590a1719bcc5d7e5.slice/crio-conmon-2d0e4151dfd779023c2cdffa0f63f74dabd1568f864ae7ed089a95f39e140429.scope\": RecentStats: unable to find data in memory cache]"
Mar 08 03:26:05.141073 master-0 kubenswrapper[7387]: I0308 03:26:05.141011 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log"
Mar 08 03:26:05.141893 master-0 kubenswrapper[7387]: I0308 03:26:05.141843 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log"
Mar 08 03:26:05.142724 master-0 kubenswrapper[7387]: I0308 03:26:05.142672 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log"
Mar 08 03:26:05.143375 master-0 kubenswrapper[7387]: I0308 03:26:05.143342 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log"
Mar 08 03:26:05.144963 master-0 kubenswrapper[7387]: I0308 03:26:05.144888 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 08 03:26:05.148104 master-0 kubenswrapper[7387]: I0308 03:26:05.147962 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_aea52bbe-5b64-45c7-8f8c-81d027f133d0/installer/0.log"
Mar 08 03:26:05.148104 master-0 kubenswrapper[7387]: I0308 03:26:05.148086 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 08 03:26:05.328041 master-0 kubenswrapper[7387]: I0308 03:26:05.327947 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") "
Mar 08 03:26:05.328041 master-0 kubenswrapper[7387]: I0308 03:26:05.328027 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") "
Mar 08 03:26:05.328417 master-0 kubenswrapper[7387]: I0308 03:26:05.328074 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir" (OuterVolumeSpecName: "log-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:26:05.328417 master-0 kubenswrapper[7387]: I0308 03:26:05.328099 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") "
Mar 08 03:26:05.328417 master-0 kubenswrapper[7387]: I0308 03:26:05.328234 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aea52bbe-5b64-45c7-8f8c-81d027f133d0-kubelet-dir\") pod \"aea52bbe-5b64-45c7-8f8c-81d027f133d0\" (UID: \"aea52bbe-5b64-45c7-8f8c-81d027f133d0\") "
Mar 08 03:26:05.328417 master-0 kubenswrapper[7387]: I0308 03:26:05.328143 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:26:05.328417 master-0 kubenswrapper[7387]: I0308 03:26:05.328161 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:26:05.328417 master-0 kubenswrapper[7387]: I0308 03:26:05.328276 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aea52bbe-5b64-45c7-8f8c-81d027f133d0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "aea52bbe-5b64-45c7-8f8c-81d027f133d0" (UID: "aea52bbe-5b64-45c7-8f8c-81d027f133d0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:26:05.328417 master-0 kubenswrapper[7387]: I0308 03:26:05.328333 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aea52bbe-5b64-45c7-8f8c-81d027f133d0-kube-api-access\") pod \"aea52bbe-5b64-45c7-8f8c-81d027f133d0\" (UID: \"aea52bbe-5b64-45c7-8f8c-81d027f133d0\") "
Mar 08 03:26:05.329140 master-0 kubenswrapper[7387]: I0308 03:26:05.328427 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") "
Mar 08 03:26:05.329140 master-0 kubenswrapper[7387]: I0308 03:26:05.328476 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aea52bbe-5b64-45c7-8f8c-81d027f133d0-var-lock\") pod \"aea52bbe-5b64-45c7-8f8c-81d027f133d0\" (UID: \"aea52bbe-5b64-45c7-8f8c-81d027f133d0\") "
Mar 08 03:26:05.329140 master-0 kubenswrapper[7387]: I0308 03:26:05.328595 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") "
Mar 08 03:26:05.329140 master-0 kubenswrapper[7387]: I0308 03:26:05.328691 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") "
Mar 08 03:26:05.329140 master-0 kubenswrapper[7387]: I0308 03:26:05.328713 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:26:05.329140 master-0 kubenswrapper[7387]: I0308 03:26:05.328745 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir" (OuterVolumeSpecName: "data-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:26:05.329140 master-0 kubenswrapper[7387]: I0308 03:26:05.328789 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aea52bbe-5b64-45c7-8f8c-81d027f133d0-var-lock" (OuterVolumeSpecName: "var-lock") pod "aea52bbe-5b64-45c7-8f8c-81d027f133d0" (UID: "aea52bbe-5b64-45c7-8f8c-81d027f133d0"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:26:05.329140 master-0 kubenswrapper[7387]: I0308 03:26:05.328853 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:26:05.329814 master-0 kubenswrapper[7387]: I0308 03:26:05.329449 7387 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:26:05.329814 master-0 kubenswrapper[7387]: I0308 03:26:05.329489 7387 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:26:05.329814 master-0 kubenswrapper[7387]: I0308 03:26:05.329515 7387 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:26:05.329814 master-0 kubenswrapper[7387]: I0308 03:26:05.329541 7387 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:26:05.329814 master-0 kubenswrapper[7387]: I0308 03:26:05.329566 7387 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aea52bbe-5b64-45c7-8f8c-81d027f133d0-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:26:05.329814 master-0 kubenswrapper[7387]: I0308 03:26:05.329593 7387 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") on node \"master-0\" DevicePath \"\""
Mar 08 03:26:05.329814 master-0 kubenswrapper[7387]: I0308 03:26:05.329617 7387 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aea52bbe-5b64-45c7-8f8c-81d027f133d0-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 08 03:26:05.329814 master-0 kubenswrapper[7387]: I0308 03:26:05.329642 7387 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:26:05.333742 master-0 kubenswrapper[7387]: I0308 03:26:05.333647 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aea52bbe-5b64-45c7-8f8c-81d027f133d0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "aea52bbe-5b64-45c7-8f8c-81d027f133d0" (UID: "aea52bbe-5b64-45c7-8f8c-81d027f133d0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:26:05.432078 master-0 kubenswrapper[7387]: I0308 03:26:05.431835 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aea52bbe-5b64-45c7-8f8c-81d027f133d0-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 08 03:26:05.600234 master-0 kubenswrapper[7387]: I0308 03:26:05.600151 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:26:05.600234 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:26:05.600234 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:26:05.600234 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:26:05.600994 master-0 kubenswrapper[7387]: I0308 03:26:05.600245 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:26:05.773417 master-0 kubenswrapper[7387]: I0308 03:26:05.773301 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" path="/var/lib/kubelet/pods/8e52bef89f4b50e4590a1719bcc5d7e5/volumes"
Mar 08 03:26:05.785726 master-0 kubenswrapper[7387]: I0308 03:26:05.785660 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_aea52bbe-5b64-45c7-8f8c-81d027f133d0/installer/0.log"
Mar 08 03:26:05.785885 master-0 kubenswrapper[7387]: I0308 03:26:05.785814 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"aea52bbe-5b64-45c7-8f8c-81d027f133d0","Type":"ContainerDied","Data":"ee2ff48f65a67b3bbbb6b179a0933cc0168e98cece572d365f2988cd098c9b0b"}
Mar 08 03:26:05.785885 master-0 kubenswrapper[7387]: I0308 03:26:05.785857 7387 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee2ff48f65a67b3bbbb6b179a0933cc0168e98cece572d365f2988cd098c9b0b"
Mar 08 03:26:05.786220 master-0 kubenswrapper[7387]: I0308 03:26:05.785879 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 08 03:26:05.789450 master-0 kubenswrapper[7387]: I0308 03:26:05.789403 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log"
Mar 08 03:26:05.790755 master-0 kubenswrapper[7387]: I0308 03:26:05.790709 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log"
Mar 08 03:26:05.791848 master-0 kubenswrapper[7387]: I0308 03:26:05.791786 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log"
Mar 08 03:26:05.792606 master-0 kubenswrapper[7387]: I0308 03:26:05.792544 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log"
Mar 08 03:26:05.795615 master-0 kubenswrapper[7387]: I0308 03:26:05.795566 7387 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="b0ddb055abb1b0b4b1b7109c72f5a73dd28b828d536a0eddb00d00cac4e3d31a" exitCode=137
Mar 08 03:26:05.795615 master-0 kubenswrapper[7387]: I0308 03:26:05.795605 7387 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="2d0e4151dfd779023c2cdffa0f63f74dabd1568f864ae7ed089a95f39e140429" exitCode=137
Mar 08 03:26:05.795966 master-0 kubenswrapper[7387]: I0308 03:26:05.795662 7387 scope.go:117] "RemoveContainer" containerID="5f2d918ed65c54c86661b7dcec562a46483a207694d4f8bd8c866e26621dca11"
Mar 08 03:26:05.795966 master-0 kubenswrapper[7387]: I0308 03:26:05.795760 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 08 03:26:05.825857 master-0 kubenswrapper[7387]: I0308 03:26:05.825797 7387 scope.go:117] "RemoveContainer" containerID="d43c128670bddbe13157be5c410fd92ee875166fc289159e03df7915d0b4a4b5"
Mar 08 03:26:05.848994 master-0 kubenswrapper[7387]: I0308 03:26:05.848939 7387 scope.go:117] "RemoveContainer" containerID="e12c78daead84383caebe7336896e67a8f0e6a3ed9ea399e900316d1f1ebd509"
Mar 08 03:26:05.871590 master-0 kubenswrapper[7387]: I0308 03:26:05.871531 7387 scope.go:117] "RemoveContainer" containerID="b0ddb055abb1b0b4b1b7109c72f5a73dd28b828d536a0eddb00d00cac4e3d31a"
Mar 08 03:26:05.894258 master-0 kubenswrapper[7387]: I0308 03:26:05.894187 7387 scope.go:117] "RemoveContainer" containerID="2d0e4151dfd779023c2cdffa0f63f74dabd1568f864ae7ed089a95f39e140429"
Mar 08 03:26:05.915295 master-0 kubenswrapper[7387]: I0308 03:26:05.915245 7387 scope.go:117] "RemoveContainer" containerID="2acce355218fb4db709f8cd62c68924badb7990d91f8c47577e1cc6d989432b9"
Mar 08 03:26:05.938443 master-0 kubenswrapper[7387]: I0308 03:26:05.938386 7387 scope.go:117] "RemoveContainer" containerID="ccac585e78a02166fe1d8053ba7b0fc4bb461435c208a87a1aaeb9d9552f95ee"
Mar 08 03:26:05.963601 master-0 kubenswrapper[7387]: I0308 03:26:05.963437 7387 scope.go:117] "RemoveContainer" containerID="82f2a3373607d1947b3011dec302f94a1adb04d9790715e6842c60965770b27f"
Mar 08 03:26:05.989349 master-0 kubenswrapper[7387]: I0308 03:26:05.989240 7387 scope.go:117] "RemoveContainer" containerID="5f2d918ed65c54c86661b7dcec562a46483a207694d4f8bd8c866e26621dca11"
Mar 08 03:26:05.990361 master-0 kubenswrapper[7387]: E0308 03:26:05.990276 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f2d918ed65c54c86661b7dcec562a46483a207694d4f8bd8c866e26621dca11\": container with ID starting with 5f2d918ed65c54c86661b7dcec562a46483a207694d4f8bd8c866e26621dca11 not found: ID does not exist" containerID="5f2d918ed65c54c86661b7dcec562a46483a207694d4f8bd8c866e26621dca11"
Mar 08 03:26:05.990508 master-0 kubenswrapper[7387]: I0308 03:26:05.990351 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f2d918ed65c54c86661b7dcec562a46483a207694d4f8bd8c866e26621dca11"} err="failed to get container status \"5f2d918ed65c54c86661b7dcec562a46483a207694d4f8bd8c866e26621dca11\": rpc error: code = NotFound desc = could not find container \"5f2d918ed65c54c86661b7dcec562a46483a207694d4f8bd8c866e26621dca11\": container with ID starting with 5f2d918ed65c54c86661b7dcec562a46483a207694d4f8bd8c866e26621dca11 not found: ID does not exist"
Mar 08 03:26:05.990508 master-0 kubenswrapper[7387]: I0308 03:26:05.990394 7387 scope.go:117] "RemoveContainer" containerID="d43c128670bddbe13157be5c410fd92ee875166fc289159e03df7915d0b4a4b5"
Mar 08 03:26:05.991233 master-0 kubenswrapper[7387]: E0308 03:26:05.991161 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d43c128670bddbe13157be5c410fd92ee875166fc289159e03df7915d0b4a4b5\": container with ID starting with d43c128670bddbe13157be5c410fd92ee875166fc289159e03df7915d0b4a4b5 not found: ID does not exist" containerID="d43c128670bddbe13157be5c410fd92ee875166fc289159e03df7915d0b4a4b5"
Mar 08 03:26:05.991398 master-0 kubenswrapper[7387]: I0308 03:26:05.991263 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d43c128670bddbe13157be5c410fd92ee875166fc289159e03df7915d0b4a4b5"} err="failed to get container status \"d43c128670bddbe13157be5c410fd92ee875166fc289159e03df7915d0b4a4b5\": rpc error: code = NotFound desc = could not find container \"d43c128670bddbe13157be5c410fd92ee875166fc289159e03df7915d0b4a4b5\": container with ID starting with d43c128670bddbe13157be5c410fd92ee875166fc289159e03df7915d0b4a4b5 not found: ID does not exist"
Mar 08 03:26:05.991398 master-0 kubenswrapper[7387]: I0308 03:26:05.991363 7387 scope.go:117] "RemoveContainer" containerID="e12c78daead84383caebe7336896e67a8f0e6a3ed9ea399e900316d1f1ebd509"
Mar 08 03:26:05.992440 master-0 kubenswrapper[7387]: E0308 03:26:05.992340 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e12c78daead84383caebe7336896e67a8f0e6a3ed9ea399e900316d1f1ebd509\": container with ID starting with e12c78daead84383caebe7336896e67a8f0e6a3ed9ea399e900316d1f1ebd509 not found: ID does not exist" containerID="e12c78daead84383caebe7336896e67a8f0e6a3ed9ea399e900316d1f1ebd509"
Mar 08 03:26:05.992604 master-0 kubenswrapper[7387]: I0308 03:26:05.992423 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e12c78daead84383caebe7336896e67a8f0e6a3ed9ea399e900316d1f1ebd509"} err="failed to get container status \"e12c78daead84383caebe7336896e67a8f0e6a3ed9ea399e900316d1f1ebd509\": rpc error: code = NotFound desc = could not find container \"e12c78daead84383caebe7336896e67a8f0e6a3ed9ea399e900316d1f1ebd509\": container with ID starting with e12c78daead84383caebe7336896e67a8f0e6a3ed9ea399e900316d1f1ebd509 not found: ID does not exist"
Mar 08 03:26:05.992604 master-0 kubenswrapper[7387]: I0308 03:26:05.992504 7387 scope.go:117] "RemoveContainer" containerID="b0ddb055abb1b0b4b1b7109c72f5a73dd28b828d536a0eddb00d00cac4e3d31a"
Mar 08 03:26:05.993539 master-0 kubenswrapper[7387]: E0308 03:26:05.993460 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0ddb055abb1b0b4b1b7109c72f5a73dd28b828d536a0eddb00d00cac4e3d31a\": container with ID starting with b0ddb055abb1b0b4b1b7109c72f5a73dd28b828d536a0eddb00d00cac4e3d31a not found: ID does not exist" containerID="b0ddb055abb1b0b4b1b7109c72f5a73dd28b828d536a0eddb00d00cac4e3d31a"
Mar 08 03:26:05.993539 master-0 kubenswrapper[7387]: I0308 03:26:05.993519 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0ddb055abb1b0b4b1b7109c72f5a73dd28b828d536a0eddb00d00cac4e3d31a"} err="failed to get container status \"b0ddb055abb1b0b4b1b7109c72f5a73dd28b828d536a0eddb00d00cac4e3d31a\": rpc error: code = NotFound desc = could not find container \"b0ddb055abb1b0b4b1b7109c72f5a73dd28b828d536a0eddb00d00cac4e3d31a\": container with ID starting with b0ddb055abb1b0b4b1b7109c72f5a73dd28b828d536a0eddb00d00cac4e3d31a not found: ID does not exist"
Mar 08 03:26:05.993801 master-0 kubenswrapper[7387]: I0308 03:26:05.993551 7387 scope.go:117] "RemoveContainer" containerID="2d0e4151dfd779023c2cdffa0f63f74dabd1568f864ae7ed089a95f39e140429"
Mar 08 03:26:05.994376 master-0 kubenswrapper[7387]: E0308 03:26:05.994285 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d0e4151dfd779023c2cdffa0f63f74dabd1568f864ae7ed089a95f39e140429\": container with ID starting with 2d0e4151dfd779023c2cdffa0f63f74dabd1568f864ae7ed089a95f39e140429 not found: ID does not exist" containerID="2d0e4151dfd779023c2cdffa0f63f74dabd1568f864ae7ed089a95f39e140429"
Mar 08 03:26:05.994535 master-0 kubenswrapper[7387]: I0308 03:26:05.994402 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d0e4151dfd779023c2cdffa0f63f74dabd1568f864ae7ed089a95f39e140429"} err="failed to get container status \"2d0e4151dfd779023c2cdffa0f63f74dabd1568f864ae7ed089a95f39e140429\": rpc error: code = NotFound desc = could not find container \"2d0e4151dfd779023c2cdffa0f63f74dabd1568f864ae7ed089a95f39e140429\": container with ID starting with 2d0e4151dfd779023c2cdffa0f63f74dabd1568f864ae7ed089a95f39e140429 not found: ID does not exist"
Mar 08 03:26:05.994535 master-0 kubenswrapper[7387]: I0308 03:26:05.994493 7387 scope.go:117] "RemoveContainer" containerID="2acce355218fb4db709f8cd62c68924badb7990d91f8c47577e1cc6d989432b9"
Mar 08 03:26:05.995380 master-0 kubenswrapper[7387]: E0308 03:26:05.995314 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2acce355218fb4db709f8cd62c68924badb7990d91f8c47577e1cc6d989432b9\": container with ID starting with 2acce355218fb4db709f8cd62c68924badb7990d91f8c47577e1cc6d989432b9 not found: ID does not exist" containerID="2acce355218fb4db709f8cd62c68924badb7990d91f8c47577e1cc6d989432b9"
Mar 08 03:26:05.995380 master-0 kubenswrapper[7387]: I0308 03:26:05.995364 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2acce355218fb4db709f8cd62c68924badb7990d91f8c47577e1cc6d989432b9"} err="failed to get container status \"2acce355218fb4db709f8cd62c68924badb7990d91f8c47577e1cc6d989432b9\": rpc error: code = NotFound desc = could not find container \"2acce355218fb4db709f8cd62c68924badb7990d91f8c47577e1cc6d989432b9\": container with ID starting with 2acce355218fb4db709f8cd62c68924badb7990d91f8c47577e1cc6d989432b9 not found: ID does not exist"
Mar 08 03:26:05.995625 master-0 kubenswrapper[7387]: I0308 03:26:05.995394 7387 scope.go:117] "RemoveContainer" containerID="ccac585e78a02166fe1d8053ba7b0fc4bb461435c208a87a1aaeb9d9552f95ee"
Mar 08 03:26:05.996346 master-0 kubenswrapper[7387]: E0308 03:26:05.996247 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccac585e78a02166fe1d8053ba7b0fc4bb461435c208a87a1aaeb9d9552f95ee\": container with ID starting with ccac585e78a02166fe1d8053ba7b0fc4bb461435c208a87a1aaeb9d9552f95ee not found: ID does not exist" containerID="ccac585e78a02166fe1d8053ba7b0fc4bb461435c208a87a1aaeb9d9552f95ee"
Mar 08 03:26:05.996516 master-0 kubenswrapper[7387]: I0308 03:26:05.996330 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccac585e78a02166fe1d8053ba7b0fc4bb461435c208a87a1aaeb9d9552f95ee"} err="failed to get container status \"ccac585e78a02166fe1d8053ba7b0fc4bb461435c208a87a1aaeb9d9552f95ee\": rpc error: code = NotFound desc = could not find container \"ccac585e78a02166fe1d8053ba7b0fc4bb461435c208a87a1aaeb9d9552f95ee\": container with ID starting with ccac585e78a02166fe1d8053ba7b0fc4bb461435c208a87a1aaeb9d9552f95ee not found: ID does not exist"
Mar 08 03:26:05.996516 master-0 kubenswrapper[7387]: I0308 03:26:05.996423 7387 scope.go:117] "RemoveContainer" containerID="82f2a3373607d1947b3011dec302f94a1adb04d9790715e6842c60965770b27f"
Mar 08 03:26:05.997158 master-0 kubenswrapper[7387]: E0308 03:26:05.997095 7387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82f2a3373607d1947b3011dec302f94a1adb04d9790715e6842c60965770b27f\": container with ID starting with 82f2a3373607d1947b3011dec302f94a1adb04d9790715e6842c60965770b27f not found: ID does not exist" containerID="82f2a3373607d1947b3011dec302f94a1adb04d9790715e6842c60965770b27f"
Mar 08 03:26:05.997158 master-0 kubenswrapper[7387]: I0308 03:26:05.997147 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82f2a3373607d1947b3011dec302f94a1adb04d9790715e6842c60965770b27f"} err="failed to get container status \"82f2a3373607d1947b3011dec302f94a1adb04d9790715e6842c60965770b27f\": rpc error: code = NotFound desc = could not find container \"82f2a3373607d1947b3011dec302f94a1adb04d9790715e6842c60965770b27f\": container with ID starting with 82f2a3373607d1947b3011dec302f94a1adb04d9790715e6842c60965770b27f not found: ID does not exist"
Mar 08 03:26:05.997423 master-0 kubenswrapper[7387]: I0308 03:26:05.997176 7387 scope.go:117] "RemoveContainer" containerID="5f2d918ed65c54c86661b7dcec562a46483a207694d4f8bd8c866e26621dca11"
Mar 08 03:26:05.998197 master-0 kubenswrapper[7387]: I0308 03:26:05.998090 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f2d918ed65c54c86661b7dcec562a46483a207694d4f8bd8c866e26621dca11"} err="failed to get container status \"5f2d918ed65c54c86661b7dcec562a46483a207694d4f8bd8c866e26621dca11\": rpc error: code = NotFound desc = could not find container \"5f2d918ed65c54c86661b7dcec562a46483a207694d4f8bd8c866e26621dca11\": container with ID starting with 5f2d918ed65c54c86661b7dcec562a46483a207694d4f8bd8c866e26621dca11 not found: ID does not exist"
Mar 08 03:26:05.998197 master-0 kubenswrapper[7387]: I0308 03:26:05.998171 7387 scope.go:117] "RemoveContainer" containerID="d43c128670bddbe13157be5c410fd92ee875166fc289159e03df7915d0b4a4b5"
Mar 08 03:26:05.998769 master-0 kubenswrapper[7387]: I0308 03:26:05.998705 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d43c128670bddbe13157be5c410fd92ee875166fc289159e03df7915d0b4a4b5"} err="failed to get container status \"d43c128670bddbe13157be5c410fd92ee875166fc289159e03df7915d0b4a4b5\": rpc error: code = NotFound desc = could not find container \"d43c128670bddbe13157be5c410fd92ee875166fc289159e03df7915d0b4a4b5\": container with ID starting with d43c128670bddbe13157be5c410fd92ee875166fc289159e03df7915d0b4a4b5 not found: ID does not exist"
Mar 08 03:26:05.998769 master-0 kubenswrapper[7387]: I0308 03:26:05.998748 7387 scope.go:117] "RemoveContainer" containerID="e12c78daead84383caebe7336896e67a8f0e6a3ed9ea399e900316d1f1ebd509"
Mar 08 03:26:05.999403 master-0 kubenswrapper[7387]: I0308 03:26:05.999333 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e12c78daead84383caebe7336896e67a8f0e6a3ed9ea399e900316d1f1ebd509"} err="failed to get container status \"e12c78daead84383caebe7336896e67a8f0e6a3ed9ea399e900316d1f1ebd509\": rpc error: code = NotFound desc = could not find container \"e12c78daead84383caebe7336896e67a8f0e6a3ed9ea399e900316d1f1ebd509\": container with ID starting with e12c78daead84383caebe7336896e67a8f0e6a3ed9ea399e900316d1f1ebd509 not found: ID does not exist"
Mar 08 03:26:05.999403 master-0 kubenswrapper[7387]: I0308 03:26:05.999383 7387 scope.go:117] "RemoveContainer" containerID="b0ddb055abb1b0b4b1b7109c72f5a73dd28b828d536a0eddb00d00cac4e3d31a"
Mar 08 03:26:06.000015 master-0 kubenswrapper[7387]: I0308 03:26:05.999951 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0ddb055abb1b0b4b1b7109c72f5a73dd28b828d536a0eddb00d00cac4e3d31a"} err="failed to get container status \"b0ddb055abb1b0b4b1b7109c72f5a73dd28b828d536a0eddb00d00cac4e3d31a\": rpc error: code = NotFound desc = could not find container \"b0ddb055abb1b0b4b1b7109c72f5a73dd28b828d536a0eddb00d00cac4e3d31a\": container with ID starting with b0ddb055abb1b0b4b1b7109c72f5a73dd28b828d536a0eddb00d00cac4e3d31a not found: ID does not exist"
Mar 08 03:26:06.000173 master-0 kubenswrapper[7387]: I0308 03:26:06.000046 7387 scope.go:117] "RemoveContainer" containerID="2d0e4151dfd779023c2cdffa0f63f74dabd1568f864ae7ed089a95f39e140429"
Mar 08 03:26:06.000767 master-0 kubenswrapper[7387]: I0308 03:26:06.000711 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d0e4151dfd779023c2cdffa0f63f74dabd1568f864ae7ed089a95f39e140429"} err="failed to get container status \"2d0e4151dfd779023c2cdffa0f63f74dabd1568f864ae7ed089a95f39e140429\": rpc error: code = NotFound desc = could not find container \"2d0e4151dfd779023c2cdffa0f63f74dabd1568f864ae7ed089a95f39e140429\": container with ID starting with 2d0e4151dfd779023c2cdffa0f63f74dabd1568f864ae7ed089a95f39e140429 not found: ID does not exist"
Mar 08 03:26:06.000767 master-0 kubenswrapper[7387]: I0308 03:26:06.000756 7387 scope.go:117] "RemoveContainer" containerID="2acce355218fb4db709f8cd62c68924badb7990d91f8c47577e1cc6d989432b9"
Mar 08 03:26:06.001580 master-0 kubenswrapper[7387]: I0308 03:26:06.001437 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2acce355218fb4db709f8cd62c68924badb7990d91f8c47577e1cc6d989432b9"} err="failed to get container status \"2acce355218fb4db709f8cd62c68924badb7990d91f8c47577e1cc6d989432b9\": rpc error: code = NotFound desc = could not find container \"2acce355218fb4db709f8cd62c68924badb7990d91f8c47577e1cc6d989432b9\": container with ID starting with 2acce355218fb4db709f8cd62c68924badb7990d91f8c47577e1cc6d989432b9 not found: ID does not exist"
Mar 08 03:26:06.001759 master-0 kubenswrapper[7387]: I0308 03:26:06.001588 7387 scope.go:117] "RemoveContainer" containerID="ccac585e78a02166fe1d8053ba7b0fc4bb461435c208a87a1aaeb9d9552f95ee"
Mar 08 03:26:06.002303 master-0 kubenswrapper[7387]: I0308 03:26:06.002252 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccac585e78a02166fe1d8053ba7b0fc4bb461435c208a87a1aaeb9d9552f95ee"} err="failed to get container status \"ccac585e78a02166fe1d8053ba7b0fc4bb461435c208a87a1aaeb9d9552f95ee\": rpc error: code = NotFound desc = could not find container \"ccac585e78a02166fe1d8053ba7b0fc4bb461435c208a87a1aaeb9d9552f95ee\": container with ID starting with ccac585e78a02166fe1d8053ba7b0fc4bb461435c208a87a1aaeb9d9552f95ee not found: ID does not exist"
Mar 08 03:26:06.002303 master-0 kubenswrapper[7387]: I0308 03:26:06.002295 7387 scope.go:117] "RemoveContainer" containerID="82f2a3373607d1947b3011dec302f94a1adb04d9790715e6842c60965770b27f"
Mar 08 03:26:06.002832 master-0 kubenswrapper[7387]: I0308 03:26:06.002762 7387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82f2a3373607d1947b3011dec302f94a1adb04d9790715e6842c60965770b27f"} err="failed to get container status \"82f2a3373607d1947b3011dec302f94a1adb04d9790715e6842c60965770b27f\": rpc error: code = NotFound desc = could not find container \"82f2a3373607d1947b3011dec302f94a1adb04d9790715e6842c60965770b27f\": container with ID starting with 82f2a3373607d1947b3011dec302f94a1adb04d9790715e6842c60965770b27f not found: ID does not exist"
Mar 08 03:26:06.599784 master-0 kubenswrapper[7387]: I0308 03:26:06.599719 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:26:06.599784 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:26:06.599784 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:26:06.599784 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:26:06.601116 master-0 kubenswrapper[7387]: I0308 03:26:06.599795 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:26:06.881319 master-0 kubenswrapper[7387]: I0308 03:26:06.881187 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 08 03:26:07.599378 master-0 kubenswrapper[7387]: I0308 03:26:07.599267 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:26:07.599378 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:26:07.599378 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:26:07.599378 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:26:07.599378 master-0 kubenswrapper[7387]: I0308 03:26:07.599365 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d"
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:26:08.599236 master-0 kubenswrapper[7387]: I0308 03:26:08.599144 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:26:08.599236 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:26:08.599236 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:26:08.599236 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:26:08.600361 master-0 kubenswrapper[7387]: I0308 03:26:08.599247 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:26:09.599973 master-0 kubenswrapper[7387]: I0308 03:26:09.599874 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:26:09.599973 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:26:09.599973 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:26:09.599973 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:26:09.601080 master-0 kubenswrapper[7387]: I0308 03:26:09.599996 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:26:10.599118 master-0 kubenswrapper[7387]: I0308 03:26:10.599013 7387 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:26:10.599118 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:26:10.599118 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:26:10.599118 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:26:10.599118 master-0 kubenswrapper[7387]: I0308 03:26:10.599098 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:26:10.599746 master-0 kubenswrapper[7387]: I0308 03:26:10.599215 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:26:10.600429 master-0 kubenswrapper[7387]: I0308 03:26:10.600371 7387 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"7fa04e21a63adad667dc50ba88735d25193a1b6333668c5723070e6f990fccc3"} pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" containerMessage="Container router failed startup probe, will be restarted" Mar 08 03:26:10.601233 master-0 kubenswrapper[7387]: I0308 03:26:10.600450 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" containerID="cri-o://7fa04e21a63adad667dc50ba88735d25193a1b6333668c5723070e6f990fccc3" gracePeriod=3600 Mar 08 03:26:12.439788 master-0 kubenswrapper[7387]: E0308 03:26:12.439679 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:26:12.759771 master-0 kubenswrapper[7387]: I0308 03:26:12.759696 7387 scope.go:117] "RemoveContainer" containerID="3a03f9a9aafa4fbc2ea827886673fad2a6a9650b76a61f6d3b1c9550a51441f3" Mar 08 03:26:12.760181 master-0 kubenswrapper[7387]: E0308 03:26:12.760113 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-4bpl8_openshift-ingress-operator(197afe92-5912-4e90-a477-e3abe001bbc7)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" podUID="197afe92-5912-4e90-a477-e3abe001bbc7" Mar 08 03:26:13.070705 master-0 kubenswrapper[7387]: I0308 03:26:13.070561 7387 scope.go:117] "RemoveContainer" containerID="da5c0193c648331dfa0a6bd33ec4c599a059bf9e4842b26f52002f9bec9abbb4" Mar 08 03:26:13.696546 master-0 kubenswrapper[7387]: E0308 03:26:13.696240 7387 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:26:17.760436 master-0 kubenswrapper[7387]: I0308 03:26:17.760330 7387 scope.go:117] "RemoveContainer" containerID="21c52df530390b93390189f772de90cc9461c78b27a38ea7bd0553d5255d9c65" Mar 08 03:26:17.761239 master-0 kubenswrapper[7387]: E0308 03:26:17.760878 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 08 03:26:18.759483 master-0 kubenswrapper[7387]: I0308 03:26:18.759391 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 08 03:26:18.785118 master-0 kubenswrapper[7387]: I0308 03:26:18.785072 7387 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="f56d0a85-fd9c-4bfb-9912-27fbe89b0adb" Mar 08 03:26:18.785746 master-0 kubenswrapper[7387]: I0308 03:26:18.785718 7387 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="f56d0a85-fd9c-4bfb-9912-27fbe89b0adb" Mar 08 03:26:20.924006 master-0 kubenswrapper[7387]: I0308 03:26:20.923841 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-ppdzb_4fd323ae-11bf-4207-bdce-4d51a9c19dc3/approver/1.log" Mar 08 03:26:20.924965 master-0 kubenswrapper[7387]: I0308 03:26:20.924886 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-ppdzb_4fd323ae-11bf-4207-bdce-4d51a9c19dc3/approver/0.log" Mar 08 03:26:20.925808 master-0 kubenswrapper[7387]: I0308 03:26:20.925750 7387 generic.go:334] "Generic (PLEG): container finished" podID="4fd323ae-11bf-4207-bdce-4d51a9c19dc3" containerID="7ee5b861c39dc6b2389534ffbe109ec1e2487bbf38c2ab8f456f84e12449168e" exitCode=1 Mar 08 03:26:20.925888 master-0 kubenswrapper[7387]: I0308 03:26:20.925810 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-ppdzb" event={"ID":"4fd323ae-11bf-4207-bdce-4d51a9c19dc3","Type":"ContainerDied","Data":"7ee5b861c39dc6b2389534ffbe109ec1e2487bbf38c2ab8f456f84e12449168e"} Mar 08 03:26:20.925888 master-0 kubenswrapper[7387]: I0308 03:26:20.925863 7387 scope.go:117] "RemoveContainer" 
containerID="c5eec4110852b5b6f65ead45beeb23e454a4f0a36ca8d676067c0e98d6a8439c" Mar 08 03:26:20.926768 master-0 kubenswrapper[7387]: I0308 03:26:20.926709 7387 scope.go:117] "RemoveContainer" containerID="7ee5b861c39dc6b2389534ffbe109ec1e2487bbf38c2ab8f456f84e12449168e" Mar 08 03:26:21.937971 master-0 kubenswrapper[7387]: I0308 03:26:21.937867 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-ppdzb_4fd323ae-11bf-4207-bdce-4d51a9c19dc3/approver/1.log" Mar 08 03:26:21.938790 master-0 kubenswrapper[7387]: I0308 03:26:21.938529 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-ppdzb" event={"ID":"4fd323ae-11bf-4207-bdce-4d51a9c19dc3","Type":"ContainerStarted","Data":"512e784c1309eafed6e9816e950b961089d38106ac209f2477cd992ae67505ee"} Mar 08 03:26:22.440208 master-0 kubenswrapper[7387]: E0308 03:26:22.440134 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:26:23.620118 master-0 kubenswrapper[7387]: E0308 03:26:23.619947 7387 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189abf40add134a7 kube-system 8516 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod 
bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:14:47 +0000 UTC,LastTimestamp:2026-03-08 03:25:49.615848367 +0000 UTC m=+886.010324088,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:26:23.697236 master-0 kubenswrapper[7387]: E0308 03:26:23.697093 7387 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:26:26.759872 master-0 kubenswrapper[7387]: I0308 03:26:26.759798 7387 scope.go:117] "RemoveContainer" containerID="3a03f9a9aafa4fbc2ea827886673fad2a6a9650b76a61f6d3b1c9550a51441f3" Mar 08 03:26:26.760484 master-0 kubenswrapper[7387]: E0308 03:26:26.760313 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-4bpl8_openshift-ingress-operator(197afe92-5912-4e90-a477-e3abe001bbc7)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" podUID="197afe92-5912-4e90-a477-e3abe001bbc7" Mar 08 03:26:29.760082 master-0 kubenswrapper[7387]: I0308 03:26:29.759894 7387 scope.go:117] "RemoveContainer" containerID="21c52df530390b93390189f772de90cc9461c78b27a38ea7bd0553d5255d9c65" Mar 08 03:26:29.760876 master-0 kubenswrapper[7387]: E0308 03:26:29.760207 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager 
pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 08 03:26:32.441112 master-0 kubenswrapper[7387]: E0308 03:26:32.441023 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:26:32.441112 master-0 kubenswrapper[7387]: E0308 03:26:32.441081 7387 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 08 03:26:33.698180 master-0 kubenswrapper[7387]: E0308 03:26:33.698078 7387 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:26:33.698180 master-0 kubenswrapper[7387]: I0308 03:26:33.698170 7387 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 08 03:26:35.471096 master-0 kubenswrapper[7387]: I0308 03:26:35.470991 7387 status_manager.go:851] "Failed to get status for pod" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" pod="openshift-etcd/etcd-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)" Mar 08 03:26:42.760350 master-0 kubenswrapper[7387]: I0308 03:26:42.760275 7387 scope.go:117] "RemoveContainer" containerID="21c52df530390b93390189f772de90cc9461c78b27a38ea7bd0553d5255d9c65" Mar 08 03:26:42.761364 master-0 kubenswrapper[7387]: I0308 03:26:42.760424 7387 scope.go:117] "RemoveContainer" 
containerID="3a03f9a9aafa4fbc2ea827886673fad2a6a9650b76a61f6d3b1c9550a51441f3" Mar 08 03:26:42.761364 master-0 kubenswrapper[7387]: E0308 03:26:42.760721 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 08 03:26:43.098882 master-0 kubenswrapper[7387]: I0308 03:26:43.098769 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4bpl8_197afe92-5912-4e90-a477-e3abe001bbc7/ingress-operator/3.log" Mar 08 03:26:43.099983 master-0 kubenswrapper[7387]: I0308 03:26:43.099399 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" event={"ID":"197afe92-5912-4e90-a477-e3abe001bbc7","Type":"ContainerStarted","Data":"05444228b07f2531dcfc116cb1ae698869c129d21221e77d1dfea4921d9d08c4"} Mar 08 03:26:43.699463 master-0 kubenswrapper[7387]: E0308 03:26:43.699369 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Mar 08 03:26:52.644221 master-0 kubenswrapper[7387]: E0308 03:26:52.644068 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:26:42Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:26:42Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:26:42Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:26:42Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:26:52.789084 master-0 kubenswrapper[7387]: E0308 03:26:52.788834 7387 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 08 03:26:52.789997 master-0 kubenswrapper[7387]: I0308 03:26:52.789894 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 08 03:26:52.823143 master-0 kubenswrapper[7387]: W0308 03:26:52.823038 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29c709c82970b529e7b9b895aa92ef05.slice/crio-9cfe782c9ff029928aff445d3583f6e6a05ba9a4632c234c96ec9b0f2402bfc5 WatchSource:0}: Error finding container 9cfe782c9ff029928aff445d3583f6e6a05ba9a4632c234c96ec9b0f2402bfc5: Status 404 returned error can't find the container with id 9cfe782c9ff029928aff445d3583f6e6a05ba9a4632c234c96ec9b0f2402bfc5 Mar 08 03:26:53.181101 master-0 kubenswrapper[7387]: I0308 03:26:53.181016 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"9cfe782c9ff029928aff445d3583f6e6a05ba9a4632c234c96ec9b0f2402bfc5"} Mar 08 03:26:53.900949 master-0 kubenswrapper[7387]: E0308 03:26:53.900780 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 08 03:26:54.189629 master-0 kubenswrapper[7387]: I0308 03:26:54.189439 7387 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="ec3ad0a8cb7c4967a852ed5f49ded9e632a837d89e4681c433e054f6efc7dd8c" exitCode=0 Mar 08 03:26:54.189629 master-0 kubenswrapper[7387]: I0308 03:26:54.189527 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"ec3ad0a8cb7c4967a852ed5f49ded9e632a837d89e4681c433e054f6efc7dd8c"} Mar 08 03:26:54.190034 master-0 kubenswrapper[7387]: I0308 03:26:54.189966 7387 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-etcd/etcd-master-0" podUID="f56d0a85-fd9c-4bfb-9912-27fbe89b0adb" Mar 08 03:26:54.190034 master-0 kubenswrapper[7387]: I0308 03:26:54.190005 7387 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="f56d0a85-fd9c-4bfb-9912-27fbe89b0adb" Mar 08 03:26:54.760434 master-0 kubenswrapper[7387]: I0308 03:26:54.760379 7387 scope.go:117] "RemoveContainer" containerID="21c52df530390b93390189f772de90cc9461c78b27a38ea7bd0553d5255d9c65" Mar 08 03:26:54.760769 master-0 kubenswrapper[7387]: E0308 03:26:54.760719 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 08 03:26:57.212832 master-0 kubenswrapper[7387]: I0308 03:26:57.212704 7387 generic.go:334] "Generic (PLEG): container finished" podID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerID="7fa04e21a63adad667dc50ba88735d25193a1b6333668c5723070e6f990fccc3" exitCode=0 Mar 08 03:26:57.212832 master-0 kubenswrapper[7387]: I0308 03:26:57.212756 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" event={"ID":"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d","Type":"ContainerDied","Data":"7fa04e21a63adad667dc50ba88735d25193a1b6333668c5723070e6f990fccc3"} Mar 08 03:26:57.212832 master-0 kubenswrapper[7387]: I0308 03:26:57.212784 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" event={"ID":"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d","Type":"ContainerStarted","Data":"1563150ee15a63a338caec1763c5794e6b7326c0a3188de3870365353993b8e5"} Mar 08 03:26:57.596527 master-0 kubenswrapper[7387]: I0308 03:26:57.596417 7387 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:26:57.600426 master-0 kubenswrapper[7387]: I0308 03:26:57.600352 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:26:57.600426 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:26:57.600426 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:26:57.600426 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:26:57.600899 master-0 kubenswrapper[7387]: I0308 03:26:57.600431 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:26:57.623598 master-0 kubenswrapper[7387]: E0308 03:26:57.623390 7387 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189abf40add134a7 kube-system 8516 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:14:47 +0000 UTC,LastTimestamp:2026-03-08 03:25:51.573461876 +0000 UTC 
m=+887.967937597,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:26:58.596624 master-0 kubenswrapper[7387]: I0308 03:26:58.596493 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:26:58.599264 master-0 kubenswrapper[7387]: I0308 03:26:58.599211 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:26:58.599264 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:26:58.599264 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:26:58.599264 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:26:58.599571 master-0 kubenswrapper[7387]: I0308 03:26:58.599280 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:26:59.600039 master-0 kubenswrapper[7387]: I0308 03:26:59.599966 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:26:59.600039 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:26:59.600039 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:26:59.600039 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:26:59.601166 master-0 kubenswrapper[7387]: I0308 03:26:59.601082 7387 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:00.600215 master-0 kubenswrapper[7387]: I0308 03:27:00.600130 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:00.600215 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:00.600215 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:00.600215 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:00.601217 master-0 kubenswrapper[7387]: I0308 03:27:00.600224 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:01.599362 master-0 kubenswrapper[7387]: I0308 03:27:01.599284 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:01.599362 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:01.599362 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:01.599362 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:01.599791 master-0 kubenswrapper[7387]: I0308 03:27:01.599379 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:02.600373 master-0 kubenswrapper[7387]: I0308 03:27:02.600283 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:02.600373 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:02.600373 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:02.600373 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:02.601400 master-0 kubenswrapper[7387]: I0308 03:27:02.600394 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:02.645104 master-0 kubenswrapper[7387]: E0308 03:27:02.645017 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:27:03.599872 master-0 kubenswrapper[7387]: I0308 03:27:03.599757 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:03.599872 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:03.599872 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:03.599872 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:03.600313 master-0 kubenswrapper[7387]: I0308 03:27:03.599980 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:04.301815 master-0 kubenswrapper[7387]: E0308 03:27:04.301700 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="800ms"
Mar 08 03:27:04.600169 master-0 kubenswrapper[7387]: I0308 03:27:04.599930 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:04.600169 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:04.600169 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:04.600169 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:04.600169 master-0 kubenswrapper[7387]: I0308 03:27:04.600111 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:05.600506 master-0 kubenswrapper[7387]: I0308 03:27:05.600416 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:05.600506 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:05.600506 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:05.600506 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:05.600506 master-0 kubenswrapper[7387]: I0308 03:27:05.600506 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:05.760134 master-0 kubenswrapper[7387]: I0308 03:27:05.760041 7387 scope.go:117] "RemoveContainer" containerID="21c52df530390b93390189f772de90cc9461c78b27a38ea7bd0553d5255d9c65"
Mar 08 03:27:05.760514 master-0 kubenswrapper[7387]: E0308 03:27:05.760462 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 08 03:27:06.599964 master-0 kubenswrapper[7387]: I0308 03:27:06.599857 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:06.599964 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:06.599964 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:06.599964 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:06.600425 master-0 kubenswrapper[7387]: I0308 03:27:06.600006 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:07.600056 master-0 kubenswrapper[7387]: I0308 03:27:07.599953 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:07.600056 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:07.600056 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:07.600056 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:07.601253 master-0 kubenswrapper[7387]: I0308 03:27:07.600063 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:08.599674 master-0 kubenswrapper[7387]: I0308 03:27:08.599562 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:08.599674 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:08.599674 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:08.599674 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:08.599965 master-0 kubenswrapper[7387]: I0308 03:27:08.599737 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:09.600980 master-0 kubenswrapper[7387]: I0308 03:27:09.600884 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:09.600980 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:09.600980 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:09.600980 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:09.602132 master-0 kubenswrapper[7387]: I0308 03:27:09.601012 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:10.599440 master-0 kubenswrapper[7387]: I0308 03:27:10.599332 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:10.599440 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:10.599440 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:10.599440 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:10.599859 master-0 kubenswrapper[7387]: I0308 03:27:10.599466 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:11.600158 master-0 kubenswrapper[7387]: I0308 03:27:11.600068 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:11.600158 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:11.600158 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:11.600158 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:11.601176 master-0 kubenswrapper[7387]: I0308 03:27:11.600181 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:12.601098 master-0 kubenswrapper[7387]: I0308 03:27:12.601020 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:12.601098 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:12.601098 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:12.601098 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:12.602097 master-0 kubenswrapper[7387]: I0308 03:27:12.601111 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:12.646462 master-0 kubenswrapper[7387]: E0308 03:27:12.646356 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes master-0)"
Mar 08 03:27:13.612616 master-0 kubenswrapper[7387]: I0308 03:27:13.600426 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:13.612616 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:13.612616 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:13.612616 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:13.612616 master-0 kubenswrapper[7387]: I0308 03:27:13.600519 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:14.600612 master-0 kubenswrapper[7387]: I0308 03:27:14.600524 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:14.600612 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:14.600612 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:14.600612 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:14.601251 master-0 kubenswrapper[7387]: I0308 03:27:14.600654 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:15.103722 master-0 kubenswrapper[7387]: E0308 03:27:15.103545 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
Mar 08 03:27:15.600254 master-0 kubenswrapper[7387]: I0308 03:27:15.600167 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:15.600254 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:15.600254 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:15.600254 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:15.600736 master-0 kubenswrapper[7387]: I0308 03:27:15.600256 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:16.600209 master-0 kubenswrapper[7387]: I0308 03:27:16.600098 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:16.600209 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:16.600209 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:16.600209 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:16.601267 master-0 kubenswrapper[7387]: I0308 03:27:16.600192 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:17.600942 master-0 kubenswrapper[7387]: I0308 03:27:17.600826 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:17.600942 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:17.600942 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:17.600942 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:17.601868 master-0 kubenswrapper[7387]: I0308 03:27:17.600952 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:18.600193 master-0 kubenswrapper[7387]: I0308 03:27:18.600101 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:18.600193 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:18.600193 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:18.600193 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:18.600646 master-0 kubenswrapper[7387]: I0308 03:27:18.600213 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:19.601375 master-0 kubenswrapper[7387]: I0308 03:27:19.601277 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:19.601375 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:19.601375 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:19.601375 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:19.602303 master-0 kubenswrapper[7387]: I0308 03:27:19.601384 7387 prober.go:107] "Probe failed" probeType="Startup"
pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:20.600267 master-0 kubenswrapper[7387]: I0308 03:27:20.600197 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:20.600267 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:20.600267 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:20.600267 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:20.600648 master-0 kubenswrapper[7387]: I0308 03:27:20.600271 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:20.760376 master-0 kubenswrapper[7387]: I0308 03:27:20.760271 7387 scope.go:117] "RemoveContainer" containerID="21c52df530390b93390189f772de90cc9461c78b27a38ea7bd0553d5255d9c65"
Mar 08 03:27:20.761350 master-0 kubenswrapper[7387]: E0308 03:27:20.760673 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 08 03:27:21.468488 master-0 kubenswrapper[7387]: I0308 03:27:21.468382 7387 generic.go:334] "Generic (PLEG): container finished" podID="7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6" containerID="0c7ee191b0d761ce93be93342e9e3606726dcf3941ed2cb569025a1100bcd65c" exitCode=0
Mar 08 03:27:21.468488 master-0 kubenswrapper[7387]: I0308 03:27:21.468451 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" event={"ID":"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6","Type":"ContainerDied","Data":"0c7ee191b0d761ce93be93342e9e3606726dcf3941ed2cb569025a1100bcd65c"}
Mar 08 03:27:21.468488 master-0 kubenswrapper[7387]: I0308 03:27:21.468500 7387 scope.go:117] "RemoveContainer" containerID="207b42b97b0cc7b2a3b3fe717f857e83a1274408fc29faf61812a15be3fc5f86"
Mar 08 03:27:21.469609 master-0 kubenswrapper[7387]: I0308 03:27:21.469549 7387 scope.go:117] "RemoveContainer" containerID="0c7ee191b0d761ce93be93342e9e3606726dcf3941ed2cb569025a1100bcd65c"
Mar 08 03:27:21.600422 master-0 kubenswrapper[7387]: I0308 03:27:21.600364 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:21.600422 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:21.600422 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:21.600422 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:21.600422 master-0 kubenswrapper[7387]: I0308 03:27:21.600418 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:22.481398 master-0 kubenswrapper[7387]: I0308 03:27:22.481342 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" event={"ID":"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6","Type":"ContainerStarted","Data":"0a3078c8133bfe672b3d28956dd312b799a7f420a940727d4a27a29719dfdf67"}
Mar 08 03:27:22.482606 master-0 kubenswrapper[7387]: I0308 03:27:22.482526 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf"
Mar 08 03:27:22.486939 master-0 kubenswrapper[7387]: I0308 03:27:22.486852 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf"
Mar 08 03:27:22.600337 master-0 kubenswrapper[7387]: I0308 03:27:22.600220 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:22.600337 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:22.600337 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:22.600337 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:22.600766 master-0 kubenswrapper[7387]: I0308 03:27:22.600341 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:22.647091 master-0 kubenswrapper[7387]: E0308 03:27:22.646976 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:27:23.600338 master-0 kubenswrapper[7387]: I0308 03:27:23.600230 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:23.600338 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:23.600338 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:23.600338 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:23.601101 master-0 kubenswrapper[7387]: I0308 03:27:23.600345 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:24.599459 master-0 kubenswrapper[7387]: I0308 03:27:24.599343 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:24.599459 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:24.599459 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:24.599459 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:24.599459 master-0 kubenswrapper[7387]: I0308 03:27:24.599448 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:25.604638 master-0 kubenswrapper[7387]: I0308 03:27:25.604553 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:25.604638 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:25.604638 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:25.604638 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:25.605722 master-0 kubenswrapper[7387]: I0308 03:27:25.604638 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:26.599806 master-0 kubenswrapper[7387]: I0308 03:27:26.599691 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:26.599806 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:26.599806 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:26.599806 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:26.599806 master-0 kubenswrapper[7387]: I0308 03:27:26.599773 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:26.705554 master-0 kubenswrapper[7387]: E0308 03:27:26.705155 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Mar 08 03:27:27.599422 master-0 kubenswrapper[7387]: I0308 03:27:27.599309 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:27.599422 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:27.599422 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:27.599422 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:27.599422 master-0 kubenswrapper[7387]: I0308 03:27:27.599391 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:28.193162 master-0 kubenswrapper[7387]: E0308 03:27:28.193067 7387 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 08 03:27:28.600091 master-0 kubenswrapper[7387]: I0308 03:27:28.600014 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:28.600091 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:28.600091 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:28.600091 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:28.600548 master-0 kubenswrapper[7387]: I0308 03:27:28.600108 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:29.537597 master-0 kubenswrapper[7387]: I0308 03:27:29.537510 7387 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="30c975c18b67e45ff1d2f959009eed3f5b14395b49fcf6b6934c0641639a5191" exitCode=0
Mar 08 03:27:29.538461 master-0 kubenswrapper[7387]: I0308 03:27:29.537653 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"30c975c18b67e45ff1d2f959009eed3f5b14395b49fcf6b6934c0641639a5191"}
Mar 08 03:27:29.538461 master-0 kubenswrapper[7387]: I0308 03:27:29.538008 7387 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="f56d0a85-fd9c-4bfb-9912-27fbe89b0adb"
Mar 08 03:27:29.538461 master-0 kubenswrapper[7387]: I0308 03:27:29.538042 7387 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="f56d0a85-fd9c-4bfb-9912-27fbe89b0adb"
Mar 08 03:27:29.541010 master-0 kubenswrapper[7387]: I0308 03:27:29.540955 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/4.log"
Mar 08 03:27:29.542043 master-0 kubenswrapper[7387]: I0308 03:27:29.541982 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/3.log"
Mar 08 03:27:29.542188 master-0 kubenswrapper[7387]: I0308 03:27:29.542069 7387 generic.go:334] "Generic (PLEG): container finished" podID="9fb588a9-6240-4513-8e4b-248eb43d3f06" containerID="bf37b1d91d00a86f3f5cd7c16070ea9424642dc69ceced0dc38c369c92d986f3" exitCode=1
Mar 08 03:27:29.542264 master-0 kubenswrapper[7387]: I0308 03:27:29.542175 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" event={"ID":"9fb588a9-6240-4513-8e4b-248eb43d3f06","Type":"ContainerDied","Data":"bf37b1d91d00a86f3f5cd7c16070ea9424642dc69ceced0dc38c369c92d986f3"}
Mar 08 03:27:29.542331 master-0 kubenswrapper[7387]: I0308 03:27:29.542285 7387 scope.go:117] "RemoveContainer" containerID="5d5ab4a36feb6e5428f4fe82fd02d1bf53851b6363e11c4e53ba7fc20e220f93"
Mar 08 03:27:29.543178 master-0 kubenswrapper[7387]: I0308 03:27:29.543119 7387 scope.go:117] "RemoveContainer" containerID="bf37b1d91d00a86f3f5cd7c16070ea9424642dc69ceced0dc38c369c92d986f3"
Mar 08 03:27:29.543544 master-0 kubenswrapper[7387]: E0308 03:27:29.543483 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-kfmd9_openshift-cluster-storage-operator(9fb588a9-6240-4513-8e4b-248eb43d3f06)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" podUID="9fb588a9-6240-4513-8e4b-248eb43d3f06"
Mar 08 03:27:29.546816 master-0 kubenswrapper[7387]: I0308 03:27:29.546746 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-rjwdp_7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b/manager/1.log"
Mar 08 03:27:29.547824 master-0 kubenswrapper[7387]: I0308 03:27:29.547760 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-rjwdp_7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b/manager/0.log"
Mar 08 03:27:29.548452 master-0 kubenswrapper[7387]: I0308 03:27:29.548383 7387 generic.go:334] "Generic (PLEG): container finished" podID="7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b" containerID="d67b7c07c51ae55685846daed44be4e4bc31d9601f7c2247d08f667ff264cd33" exitCode=1
Mar 08 03:27:29.548557 master-0 kubenswrapper[7387]: I0308 03:27:29.548430 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" event={"ID":"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b","Type":"ContainerDied","Data":"d67b7c07c51ae55685846daed44be4e4bc31d9601f7c2247d08f667ff264cd33"}
Mar 08 03:27:29.549676 master-0 kubenswrapper[7387]: I0308 03:27:29.549616 7387 scope.go:117] "RemoveContainer" containerID="d67b7c07c51ae55685846daed44be4e4bc31d9601f7c2247d08f667ff264cd33"
Mar 08 03:27:29.578218 master-0 kubenswrapper[7387]: I0308 03:27:29.578131 7387 scope.go:117] "RemoveContainer" containerID="847ec71b717fbc403d7670e2fb6fcb0eb16c5961bfffd67ba80ebb137144703d"
Mar 08 03:27:29.599517 master-0 kubenswrapper[7387]: I0308 03:27:29.599443 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:29.599517 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:29.599517 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:29.599517 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:29.599847 master-0 kubenswrapper[7387]: I0308 03:27:29.599536 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:30.561002 master-0 kubenswrapper[7387]: I0308 03:27:30.560898 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/4.log"
Mar 08 03:27:30.565271 master-0 kubenswrapper[7387]: I0308 03:27:30.565211 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-rjwdp_7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b/manager/1.log"
Mar 08 03:27:30.566057 master-0 kubenswrapper[7387]: I0308 03:27:30.565800 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" event={"ID":"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b","Type":"ContainerStarted","Data":"2fd20a2f23cfb73dda72a15dcdc73615f7bb3032c3907696dc74dd7b9b0a6582"}
Mar 08 03:27:30.566217 master-0 kubenswrapper[7387]: I0308 03:27:30.566152 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"
Mar 08 03:27:30.599500 master-0 kubenswrapper[7387]: I0308 03:27:30.599430 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:30.599500 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:30.599500 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:30.599500 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:30.599817 master-0 kubenswrapper[7387]: I0308 03:27:30.599498 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:31.600279 master-0 kubenswrapper[7387]: I0308 03:27:31.600182 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:27:31.600279 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:27:31.600279 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:27:31.600279 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:27:31.601350 master-0 kubenswrapper[7387]: I0308 03:27:31.600306 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:27:31.628173 master-0 kubenswrapper[7387]: E0308 03:27:31.627951 7387 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ingress-operator-677db989d6-4bpl8.189abf9a9d599e9d openshift-ingress-operator 10097 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress-operator,Name:ingress-operator-677db989d6-4bpl8,UID:197afe92-5912-4e90-a477-e3abe001bbc7,APIVersion:v1,ResourceVersion:3636,FieldPath:spec.containers{ingress-operator},},Reason:BackOff,Message:Back-off restarting failed container ingress-operator in pod ingress-operator-677db989d6-4bpl8_openshift-ingress-operator(197afe92-5912-4e90-a477-e3abe001bbc7),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:21:14 +0000 UTC,LastTimestamp:2026-03-08 03:25:57.700042234 +0000 UTC m=+894.094517945,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 08 03:27:31.759848 master-0 kubenswrapper[7387]: I0308 03:27:31.759745 7387 scope.go:117] "RemoveContainer" containerID="21c52df530390b93390189f772de90cc9461c78b27a38ea7bd0553d5255d9c65"
Mar 08 03:27:31.760275 master-0 kubenswrapper[7387]: E0308 03:27:31.760213 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager
pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 08 03:27:32.599930 master-0 kubenswrapper[7387]: I0308 03:27:32.599833 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:32.599930 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:32.599930 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:32.599930 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:32.600329 master-0 kubenswrapper[7387]: I0308 03:27:32.599946 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:32.647362 master-0 kubenswrapper[7387]: E0308 03:27:32.647277 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 03:27:32.647362 master-0 kubenswrapper[7387]: E0308 03:27:32.647327 7387 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 08 03:27:33.600012 master-0 kubenswrapper[7387]: I0308 03:27:33.599871 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:33.600012 master-0 kubenswrapper[7387]: 
[-]has-synced failed: reason withheld Mar 08 03:27:33.600012 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:33.600012 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:33.601031 master-0 kubenswrapper[7387]: I0308 03:27:33.599995 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:34.599780 master-0 kubenswrapper[7387]: I0308 03:27:34.599700 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:34.599780 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:34.599780 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:34.599780 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:34.600142 master-0 kubenswrapper[7387]: I0308 03:27:34.599806 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:35.473324 master-0 kubenswrapper[7387]: I0308 03:27:35.473239 7387 status_manager.go:851] "Failed to get status for pod" podUID="3c20b192-755d-46cd-ab12-2e823b92222e" pod="openshift-etcd/installer-2-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)" Mar 08 03:27:35.599768 master-0 kubenswrapper[7387]: I0308 03:27:35.599695 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:35.599768 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:35.599768 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:35.599768 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:35.600244 master-0 kubenswrapper[7387]: I0308 03:27:35.600174 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:35.611217 master-0 kubenswrapper[7387]: I0308 03:27:35.611194 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-c74s2_399c5025-da66-4c52-8e68-ea6c996d9cc8/manager/1.log" Mar 08 03:27:35.612750 master-0 kubenswrapper[7387]: I0308 03:27:35.612730 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-c74s2_399c5025-da66-4c52-8e68-ea6c996d9cc8/manager/0.log" Mar 08 03:27:35.612930 master-0 kubenswrapper[7387]: I0308 03:27:35.612868 7387 generic.go:334] "Generic (PLEG): container finished" podID="399c5025-da66-4c52-8e68-ea6c996d9cc8" containerID="1341190aa2856a973f485203a951081b82fd1c38dd7ccb12a11db05205beefcc" exitCode=1 Mar 08 03:27:35.613060 master-0 kubenswrapper[7387]: I0308 03:27:35.613006 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" event={"ID":"399c5025-da66-4c52-8e68-ea6c996d9cc8","Type":"ContainerDied","Data":"1341190aa2856a973f485203a951081b82fd1c38dd7ccb12a11db05205beefcc"} Mar 08 03:27:35.613126 master-0 kubenswrapper[7387]: I0308 03:27:35.613092 7387 scope.go:117] "RemoveContainer" 
containerID="a8f3f14f501b72ff362550257f13a332eecf70ec4f446aeb3d199baf5fd9fcca" Mar 08 03:27:35.613930 master-0 kubenswrapper[7387]: I0308 03:27:35.613866 7387 scope.go:117] "RemoveContainer" containerID="1341190aa2856a973f485203a951081b82fd1c38dd7ccb12a11db05205beefcc" Mar 08 03:27:35.778423 master-0 kubenswrapper[7387]: I0308 03:27:35.778379 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" Mar 08 03:27:35.786778 master-0 kubenswrapper[7387]: I0308 03:27:35.786718 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:27:35.787000 master-0 kubenswrapper[7387]: I0308 03:27:35.786798 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:27:36.599363 master-0 kubenswrapper[7387]: I0308 03:27:36.599287 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:36.599363 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:36.599363 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:36.599363 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:36.600308 master-0 kubenswrapper[7387]: I0308 03:27:36.599383 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:36.624870 master-0 kubenswrapper[7387]: I0308 03:27:36.624807 7387 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-c74s2_399c5025-da66-4c52-8e68-ea6c996d9cc8/manager/1.log" Mar 08 03:27:36.625541 master-0 kubenswrapper[7387]: I0308 03:27:36.625497 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" event={"ID":"399c5025-da66-4c52-8e68-ea6c996d9cc8","Type":"ContainerStarted","Data":"3e4a81748d28070680cdfee2a86d59bdb20023bc6f5b2ddbaba9fe77904077f6"} Mar 08 03:27:36.625763 master-0 kubenswrapper[7387]: I0308 03:27:36.625715 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:27:37.599254 master-0 kubenswrapper[7387]: I0308 03:27:37.599200 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:37.599254 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:37.599254 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:37.599254 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:37.600817 master-0 kubenswrapper[7387]: I0308 03:27:37.600773 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:38.599795 master-0 kubenswrapper[7387]: I0308 03:27:38.599727 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 
03:27:38.599795 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:38.599795 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:38.599795 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:38.600754 master-0 kubenswrapper[7387]: I0308 03:27:38.599808 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:39.600169 master-0 kubenswrapper[7387]: I0308 03:27:39.600111 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:39.600169 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:39.600169 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:39.600169 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:39.601312 master-0 kubenswrapper[7387]: I0308 03:27:39.601255 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:39.912054 master-0 kubenswrapper[7387]: E0308 03:27:39.910223 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Mar 08 03:27:40.599544 master-0 kubenswrapper[7387]: I0308 03:27:40.599447 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:40.599544 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:40.599544 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:40.599544 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:40.599544 master-0 kubenswrapper[7387]: I0308 03:27:40.599524 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:40.760589 master-0 kubenswrapper[7387]: I0308 03:27:40.760501 7387 scope.go:117] "RemoveContainer" containerID="bf37b1d91d00a86f3f5cd7c16070ea9424642dc69ceced0dc38c369c92d986f3" Mar 08 03:27:40.761360 master-0 kubenswrapper[7387]: E0308 03:27:40.760821 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-kfmd9_openshift-cluster-storage-operator(9fb588a9-6240-4513-8e4b-248eb43d3f06)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" podUID="9fb588a9-6240-4513-8e4b-248eb43d3f06" Mar 08 03:27:41.598930 master-0 kubenswrapper[7387]: I0308 03:27:41.598842 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:41.598930 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:41.598930 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:41.598930 master-0 kubenswrapper[7387]: healthz 
check failed Mar 08 03:27:41.598930 master-0 kubenswrapper[7387]: I0308 03:27:41.598931 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:42.600253 master-0 kubenswrapper[7387]: I0308 03:27:42.600129 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:42.600253 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:42.600253 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:42.600253 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:42.600253 master-0 kubenswrapper[7387]: I0308 03:27:42.600233 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:43.600152 master-0 kubenswrapper[7387]: I0308 03:27:43.600057 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:43.600152 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:43.600152 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:43.600152 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:43.600152 master-0 kubenswrapper[7387]: I0308 03:27:43.600144 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" 
podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:43.760269 master-0 kubenswrapper[7387]: I0308 03:27:43.760185 7387 scope.go:117] "RemoveContainer" containerID="21c52df530390b93390189f772de90cc9461c78b27a38ea7bd0553d5255d9c65" Mar 08 03:27:43.760675 master-0 kubenswrapper[7387]: E0308 03:27:43.760631 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 08 03:27:44.599543 master-0 kubenswrapper[7387]: I0308 03:27:44.599470 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:44.599543 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:44.599543 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:44.599543 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:44.600177 master-0 kubenswrapper[7387]: I0308 03:27:44.600131 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:45.599570 master-0 kubenswrapper[7387]: I0308 03:27:45.599447 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:45.599570 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:45.599570 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:45.599570 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:45.600616 master-0 kubenswrapper[7387]: I0308 03:27:45.599586 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:45.789534 master-0 kubenswrapper[7387]: I0308 03:27:45.789403 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:27:46.599731 master-0 kubenswrapper[7387]: I0308 03:27:46.599661 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:46.599731 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:46.599731 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:46.599731 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:46.600591 master-0 kubenswrapper[7387]: I0308 03:27:46.599756 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:47.599889 master-0 kubenswrapper[7387]: I0308 03:27:47.599807 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:47.599889 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:47.599889 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:47.599889 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:47.601178 master-0 kubenswrapper[7387]: I0308 03:27:47.601119 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:48.600636 master-0 kubenswrapper[7387]: I0308 03:27:48.600529 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:48.600636 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:48.600636 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:48.600636 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:48.601546 master-0 kubenswrapper[7387]: I0308 03:27:48.600651 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:49.599812 master-0 kubenswrapper[7387]: I0308 03:27:49.599717 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:49.599812 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:49.599812 master-0 kubenswrapper[7387]: 
[+]process-running ok Mar 08 03:27:49.599812 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:49.600384 master-0 kubenswrapper[7387]: I0308 03:27:49.599819 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:50.599992 master-0 kubenswrapper[7387]: I0308 03:27:50.599865 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:50.599992 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:50.599992 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:50.599992 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:50.600994 master-0 kubenswrapper[7387]: I0308 03:27:50.600004 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:51.600016 master-0 kubenswrapper[7387]: I0308 03:27:51.599874 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:51.600016 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:51.600016 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:51.600016 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:51.601019 master-0 kubenswrapper[7387]: I0308 03:27:51.600036 7387 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:51.759959 master-0 kubenswrapper[7387]: I0308 03:27:51.759868 7387 scope.go:117] "RemoveContainer" containerID="bf37b1d91d00a86f3f5cd7c16070ea9424642dc69ceced0dc38c369c92d986f3" Mar 08 03:27:51.760296 master-0 kubenswrapper[7387]: E0308 03:27:51.760239 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-kfmd9_openshift-cluster-storage-operator(9fb588a9-6240-4513-8e4b-248eb43d3f06)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" podUID="9fb588a9-6240-4513-8e4b-248eb43d3f06" Mar 08 03:27:52.599117 master-0 kubenswrapper[7387]: I0308 03:27:52.599052 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:52.599117 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:52.599117 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:52.599117 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:52.599117 master-0 kubenswrapper[7387]: I0308 03:27:52.599113 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:52.719296 master-0 kubenswrapper[7387]: E0308 03:27:52.719039 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch 
status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:27:42Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:27:42Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:27:42Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:27:42Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:27:53.599451 master-0 kubenswrapper[7387]: I0308 03:27:53.599387 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:53.599451 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:53.599451 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:53.599451 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:53.599871 master-0 kubenswrapper[7387]: I0308 03:27:53.599467 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 08 03:27:54.599774 master-0 kubenswrapper[7387]: I0308 03:27:54.599699 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:54.599774 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:54.599774 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:54.599774 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:54.601363 master-0 kubenswrapper[7387]: I0308 03:27:54.601305 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:55.600342 master-0 kubenswrapper[7387]: I0308 03:27:55.600243 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:55.600342 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:55.600342 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:55.600342 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:55.601441 master-0 kubenswrapper[7387]: I0308 03:27:55.600344 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:56.311312 master-0 kubenswrapper[7387]: E0308 03:27:56.310867 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 08 03:27:56.599683 master-0 kubenswrapper[7387]: I0308 03:27:56.599555 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:56.599683 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:56.599683 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:56.599683 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:56.599683 master-0 kubenswrapper[7387]: I0308 03:27:56.599630 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:56.759722 master-0 kubenswrapper[7387]: I0308 03:27:56.759640 7387 scope.go:117] "RemoveContainer" containerID="21c52df530390b93390189f772de90cc9461c78b27a38ea7bd0553d5255d9c65" Mar 08 03:27:56.760603 master-0 kubenswrapper[7387]: E0308 03:27:56.760078 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 08 03:27:57.599677 master-0 kubenswrapper[7387]: I0308 03:27:57.599580 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:57.599677 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:57.599677 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:57.599677 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:57.599677 master-0 kubenswrapper[7387]: I0308 03:27:57.599667 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:58.599423 master-0 kubenswrapper[7387]: I0308 03:27:58.599345 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:58.599423 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:27:58.599423 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:58.599423 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:58.600621 master-0 kubenswrapper[7387]: I0308 03:27:58.599425 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:59.600170 master-0 kubenswrapper[7387]: I0308 03:27:59.600084 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:27:59.600170 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 
03:27:59.600170 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:27:59.600170 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:27:59.601203 master-0 kubenswrapper[7387]: I0308 03:27:59.600195 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:27:59.819702 master-0 kubenswrapper[7387]: I0308 03:27:59.819634 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-lssws_b537a655-ef73-40b5-b228-95ab6cfdedf2/machine-approver-controller/0.log" Mar 08 03:27:59.820705 master-0 kubenswrapper[7387]: I0308 03:27:59.820661 7387 generic.go:334] "Generic (PLEG): container finished" podID="b537a655-ef73-40b5-b228-95ab6cfdedf2" containerID="b2bf1f96c69abb910723e2ce05cf88ba62c29d23e19982dd55b5fdb8f01184e9" exitCode=255 Mar 08 03:27:59.820821 master-0 kubenswrapper[7387]: I0308 03:27:59.820745 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws" event={"ID":"b537a655-ef73-40b5-b228-95ab6cfdedf2","Type":"ContainerDied","Data":"b2bf1f96c69abb910723e2ce05cf88ba62c29d23e19982dd55b5fdb8f01184e9"} Mar 08 03:27:59.821478 master-0 kubenswrapper[7387]: I0308 03:27:59.821425 7387 scope.go:117] "RemoveContainer" containerID="b2bf1f96c69abb910723e2ce05cf88ba62c29d23e19982dd55b5fdb8f01184e9" Mar 08 03:28:00.599251 master-0 kubenswrapper[7387]: I0308 03:28:00.599176 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:00.599251 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 
03:28:00.599251 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:00.599251 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:00.599603 master-0 kubenswrapper[7387]: I0308 03:28:00.599274 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:00.832812 master-0 kubenswrapper[7387]: I0308 03:28:00.832701 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-lssws_b537a655-ef73-40b5-b228-95ab6cfdedf2/machine-approver-controller/0.log" Mar 08 03:28:00.833659 master-0 kubenswrapper[7387]: I0308 03:28:00.833502 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws" event={"ID":"b537a655-ef73-40b5-b228-95ab6cfdedf2","Type":"ContainerStarted","Data":"6ee224d74fb7de8e8198edafc6068d987560efd453f671d6c7d78332dfd58558"} Mar 08 03:28:01.600431 master-0 kubenswrapper[7387]: I0308 03:28:01.600349 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:01.600431 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:01.600431 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:01.600431 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:01.600889 master-0 kubenswrapper[7387]: I0308 03:28:01.600458 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
08 03:28:01.846367 master-0 kubenswrapper[7387]: I0308 03:28:01.846298 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-zljww_c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6/control-plane-machine-set-operator/0.log" Mar 08 03:28:01.847363 master-0 kubenswrapper[7387]: I0308 03:28:01.846375 7387 generic.go:334] "Generic (PLEG): container finished" podID="c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6" containerID="26407c3ca61b97ca6a5ab23516c6982614940f72f59b58cd3af72397aa976645" exitCode=1 Mar 08 03:28:01.847363 master-0 kubenswrapper[7387]: I0308 03:28:01.846414 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww" event={"ID":"c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6","Type":"ContainerDied","Data":"26407c3ca61b97ca6a5ab23516c6982614940f72f59b58cd3af72397aa976645"} Mar 08 03:28:01.847363 master-0 kubenswrapper[7387]: I0308 03:28:01.847241 7387 scope.go:117] "RemoveContainer" containerID="26407c3ca61b97ca6a5ab23516c6982614940f72f59b58cd3af72397aa976645" Mar 08 03:28:02.600463 master-0 kubenswrapper[7387]: I0308 03:28:02.600323 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:02.600463 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:02.600463 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:02.600463 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:02.600463 master-0 kubenswrapper[7387]: I0308 03:28:02.600463 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
08 03:28:02.720114 master-0 kubenswrapper[7387]: E0308 03:28:02.719999 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:28:02.760612 master-0 kubenswrapper[7387]: I0308 03:28:02.760548 7387 scope.go:117] "RemoveContainer" containerID="bf37b1d91d00a86f3f5cd7c16070ea9424642dc69ceced0dc38c369c92d986f3" Mar 08 03:28:02.760975 master-0 kubenswrapper[7387]: E0308 03:28:02.760900 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-kfmd9_openshift-cluster-storage-operator(9fb588a9-6240-4513-8e4b-248eb43d3f06)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" podUID="9fb588a9-6240-4513-8e4b-248eb43d3f06" Mar 08 03:28:02.858752 master-0 kubenswrapper[7387]: I0308 03:28:02.858619 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-zljww_c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6/control-plane-machine-set-operator/0.log" Mar 08 03:28:02.859754 master-0 kubenswrapper[7387]: I0308 03:28:02.859704 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww" event={"ID":"c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6","Type":"ContainerStarted","Data":"28bacd2ede5b924353eca0c66a28f30796040a795bcf0e46420d9511c43a26ed"} Mar 08 03:28:03.541340 master-0 kubenswrapper[7387]: E0308 03:28:03.541298 7387 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 08 03:28:03.606525 
master-0 kubenswrapper[7387]: I0308 03:28:03.606419 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:03.606525 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:03.606525 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:03.606525 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:03.607102 master-0 kubenswrapper[7387]: I0308 03:28:03.606542 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:03.870052 master-0 kubenswrapper[7387]: I0308 03:28:03.869998 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-qgg4b_45212ce7-5f95-402e-93c4-83bac844f77d/cluster-baremetal-operator/0.log" Mar 08 03:28:03.870845 master-0 kubenswrapper[7387]: I0308 03:28:03.870071 7387 generic.go:334] "Generic (PLEG): container finished" podID="45212ce7-5f95-402e-93c4-83bac844f77d" containerID="1bc524d4935db97fb50be5674147f8f9cecf357fca9acfe424caa68101eaec3d" exitCode=1 Mar 08 03:28:03.870845 master-0 kubenswrapper[7387]: I0308 03:28:03.870145 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" event={"ID":"45212ce7-5f95-402e-93c4-83bac844f77d","Type":"ContainerDied","Data":"1bc524d4935db97fb50be5674147f8f9cecf357fca9acfe424caa68101eaec3d"} Mar 08 03:28:03.870845 master-0 kubenswrapper[7387]: I0308 03:28:03.870710 7387 scope.go:117] "RemoveContainer" containerID="1bc524d4935db97fb50be5674147f8f9cecf357fca9acfe424caa68101eaec3d" Mar 08 03:28:03.875853 master-0 
kubenswrapper[7387]: I0308 03:28:03.875779 7387 generic.go:334] "Generic (PLEG): container finished" podID="631b3a8e-43e0-4818-b6e1-bd61ac531ab6" containerID="3c9001c002bea8ae81641c5d4b6e3f763d09a9b2d453bd324d0fd602cf7b8d18" exitCode=0 Mar 08 03:28:03.876028 master-0 kubenswrapper[7387]: I0308 03:28:03.875857 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch" event={"ID":"631b3a8e-43e0-4818-b6e1-bd61ac531ab6","Type":"ContainerDied","Data":"3c9001c002bea8ae81641c5d4b6e3f763d09a9b2d453bd324d0fd602cf7b8d18"} Mar 08 03:28:03.876028 master-0 kubenswrapper[7387]: I0308 03:28:03.875961 7387 scope.go:117] "RemoveContainer" containerID="ae6eee5afe5e46fa6bdda2c614fc3054391ae41ef6fbf435d604af42a3bf8ed4" Mar 08 03:28:03.877103 master-0 kubenswrapper[7387]: I0308 03:28:03.877028 7387 scope.go:117] "RemoveContainer" containerID="3c9001c002bea8ae81641c5d4b6e3f763d09a9b2d453bd324d0fd602cf7b8d18" Mar 08 03:28:04.599775 master-0 kubenswrapper[7387]: I0308 03:28:04.599726 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:04.599775 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:04.599775 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:04.599775 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:04.600405 master-0 kubenswrapper[7387]: I0308 03:28:04.600355 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:04.892150 master-0 kubenswrapper[7387]: I0308 03:28:04.892092 7387 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-qgg4b_45212ce7-5f95-402e-93c4-83bac844f77d/cluster-baremetal-operator/0.log" Mar 08 03:28:04.893614 master-0 kubenswrapper[7387]: I0308 03:28:04.892253 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" event={"ID":"45212ce7-5f95-402e-93c4-83bac844f77d","Type":"ContainerStarted","Data":"1f6f8381deef57a0256fc235c898d15d43f11f73c31fe5017234823e9524bbb3"} Mar 08 03:28:04.896644 master-0 kubenswrapper[7387]: I0308 03:28:04.896592 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch" event={"ID":"631b3a8e-43e0-4818-b6e1-bd61ac531ab6","Type":"ContainerStarted","Data":"a9522050139579d5d295dbd2cd1db2f3e8c650499f7145cd677c9824f34b8f8c"} Mar 08 03:28:04.900028 master-0 kubenswrapper[7387]: I0308 03:28:04.899975 7387 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="544467ed5f69544193975fd6c79144f61384cc33dfea4931ad4d22fe98a678ac" exitCode=0 Mar 08 03:28:04.900151 master-0 kubenswrapper[7387]: I0308 03:28:04.900079 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"544467ed5f69544193975fd6c79144f61384cc33dfea4931ad4d22fe98a678ac"} Mar 08 03:28:04.900466 master-0 kubenswrapper[7387]: I0308 03:28:04.900429 7387 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="f56d0a85-fd9c-4bfb-9912-27fbe89b0adb" Mar 08 03:28:04.900576 master-0 kubenswrapper[7387]: I0308 03:28:04.900467 7387 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="f56d0a85-fd9c-4bfb-9912-27fbe89b0adb" Mar 08 03:28:04.903024 master-0 kubenswrapper[7387]: I0308 03:28:04.902898 7387 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-c4zs4_103158c5-c99f-4224-bf5a-e23b1aaf9172/cluster-node-tuning-operator/1.log" Mar 08 03:28:04.903965 master-0 kubenswrapper[7387]: I0308 03:28:04.903891 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-c4zs4_103158c5-c99f-4224-bf5a-e23b1aaf9172/cluster-node-tuning-operator/0.log" Mar 08 03:28:04.904085 master-0 kubenswrapper[7387]: I0308 03:28:04.904002 7387 generic.go:334] "Generic (PLEG): container finished" podID="103158c5-c99f-4224-bf5a-e23b1aaf9172" containerID="7828a0e0fa2706d250ad69378649c5fb641ba621ee124550bb4757af01298f2e" exitCode=1 Mar 08 03:28:04.904439 master-0 kubenswrapper[7387]: I0308 03:28:04.904091 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" event={"ID":"103158c5-c99f-4224-bf5a-e23b1aaf9172","Type":"ContainerDied","Data":"7828a0e0fa2706d250ad69378649c5fb641ba621ee124550bb4757af01298f2e"} Mar 08 03:28:04.904439 master-0 kubenswrapper[7387]: I0308 03:28:04.904132 7387 scope.go:117] "RemoveContainer" containerID="a90adc87011fbb7cd1968febcefc0ce682e90d9df30e52bef5969b7cab457d60" Mar 08 03:28:04.905082 master-0 kubenswrapper[7387]: I0308 03:28:04.905020 7387 scope.go:117] "RemoveContainer" containerID="7828a0e0fa2706d250ad69378649c5fb641ba621ee124550bb4757af01298f2e" Mar 08 03:28:04.907158 master-0 kubenswrapper[7387]: I0308 03:28:04.907119 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-8qznw_f8711b9f-3d18-4b8d-a263-2c9af9dc68a6/package-server-manager/1.log" Mar 08 03:28:04.908215 master-0 kubenswrapper[7387]: I0308 03:28:04.908168 7387 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-8qznw_f8711b9f-3d18-4b8d-a263-2c9af9dc68a6/package-server-manager/0.log" Mar 08 03:28:04.909173 master-0 kubenswrapper[7387]: I0308 03:28:04.909127 7387 generic.go:334] "Generic (PLEG): container finished" podID="f8711b9f-3d18-4b8d-a263-2c9af9dc68a6" containerID="c86422caffa4210f8d2d79226aa71c0eb21bf5b4345acfa110f682a6a9383e9a" exitCode=1 Mar 08 03:28:04.909592 master-0 kubenswrapper[7387]: I0308 03:28:04.909203 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" event={"ID":"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6","Type":"ContainerDied","Data":"c86422caffa4210f8d2d79226aa71c0eb21bf5b4345acfa110f682a6a9383e9a"} Mar 08 03:28:04.910305 master-0 kubenswrapper[7387]: I0308 03:28:04.910209 7387 scope.go:117] "RemoveContainer" containerID="c86422caffa4210f8d2d79226aa71c0eb21bf5b4345acfa110f682a6a9383e9a" Mar 08 03:28:04.910610 master-0 kubenswrapper[7387]: E0308 03:28:04.910565 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"package-server-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=package-server-manager pod=package-server-manager-854648ff6d-8qznw_openshift-operator-lifecycle-manager(f8711b9f-3d18-4b8d-a263-2c9af9dc68a6)\"" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" podUID="f8711b9f-3d18-4b8d-a263-2c9af9dc68a6" Mar 08 03:28:04.938586 master-0 kubenswrapper[7387]: I0308 03:28:04.938528 7387 scope.go:117] "RemoveContainer" containerID="61085a1c0f60df971fea9a09a95423c547ccb46d0bf74149a0614fd843a50e98" Mar 08 03:28:05.600000 master-0 kubenswrapper[7387]: I0308 03:28:05.599886 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:05.600000 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:05.600000 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:05.600000 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:05.600453 master-0 kubenswrapper[7387]: I0308 03:28:05.600019 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:05.635135 master-0 kubenswrapper[7387]: E0308 03:28:05.634943 7387 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{ingress-operator-677db989d6-4bpl8.189abf9a9d599e9d openshift-ingress-operator 10097 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress-operator,Name:ingress-operator-677db989d6-4bpl8,UID:197afe92-5912-4e90-a477-e3abe001bbc7,APIVersion:v1,ResourceVersion:3636,FieldPath:spec.containers{ingress-operator},},Reason:BackOff,Message:Back-off restarting failed container ingress-operator in pod ingress-operator-677db989d6-4bpl8_openshift-ingress-operator(197afe92-5912-4e90-a477-e3abe001bbc7),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:21:14 +0000 UTC,LastTimestamp:2026-03-08 03:26:12.760067959 +0000 UTC m=+909.154543670,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:28:05.920937 master-0 kubenswrapper[7387]: I0308 03:28:05.920836 7387 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-c4zs4_103158c5-c99f-4224-bf5a-e23b1aaf9172/cluster-node-tuning-operator/1.log" Mar 08 03:28:05.921695 master-0 kubenswrapper[7387]: I0308 03:28:05.921016 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" event={"ID":"103158c5-c99f-4224-bf5a-e23b1aaf9172","Type":"ContainerStarted","Data":"18685db9572265514e0517ed0b2082c6e5f19329de62a5795a9ac23e2de124a7"} Mar 08 03:28:05.923576 master-0 kubenswrapper[7387]: I0308 03:28:05.923529 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-8qznw_f8711b9f-3d18-4b8d-a263-2c9af9dc68a6/package-server-manager/1.log" Mar 08 03:28:06.600362 master-0 kubenswrapper[7387]: I0308 03:28:06.600051 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:06.600362 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:06.600362 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:06.600362 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:06.600362 master-0 kubenswrapper[7387]: I0308 03:28:06.600161 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:06.848958 master-0 kubenswrapper[7387]: I0308 03:28:06.840337 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" Mar 08 03:28:06.848958 master-0 
kubenswrapper[7387]: I0308 03:28:06.848798 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" Mar 08 03:28:06.849490 master-0 kubenswrapper[7387]: I0308 03:28:06.849434 7387 scope.go:117] "RemoveContainer" containerID="c86422caffa4210f8d2d79226aa71c0eb21bf5b4345acfa110f682a6a9383e9a" Mar 08 03:28:06.850176 master-0 kubenswrapper[7387]: E0308 03:28:06.850128 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"package-server-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=package-server-manager pod=package-server-manager-854648ff6d-8qznw_openshift-operator-lifecycle-manager(f8711b9f-3d18-4b8d-a263-2c9af9dc68a6)\"" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" podUID="f8711b9f-3d18-4b8d-a263-2c9af9dc68a6" Mar 08 03:28:06.872275 master-0 kubenswrapper[7387]: I0308 03:28:06.872140 7387 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Mar 08 03:28:06.872275 master-0 kubenswrapper[7387]: I0308 03:28:06.872184 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Mar 08 03:28:06.872542 master-0 kubenswrapper[7387]: I0308 03:28:06.872419 7387 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 
192.168.32.10:10259: connect: connection refused" start-of-body= Mar 08 03:28:06.872647 master-0 kubenswrapper[7387]: I0308 03:28:06.872486 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Mar 08 03:28:06.936337 master-0 kubenswrapper[7387]: I0308 03:28:06.936242 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler/0.log" Mar 08 03:28:06.937150 master-0 kubenswrapper[7387]: I0308 03:28:06.937044 7387 generic.go:334] "Generic (PLEG): container finished" podID="1d3d45b6ce1b3764f9927e623a71adf8" containerID="1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530" exitCode=1 Mar 08 03:28:06.937234 master-0 kubenswrapper[7387]: I0308 03:28:06.937161 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerDied","Data":"1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530"} Mar 08 03:28:06.937878 master-0 kubenswrapper[7387]: I0308 03:28:06.937827 7387 scope.go:117] "RemoveContainer" containerID="c86422caffa4210f8d2d79226aa71c0eb21bf5b4345acfa110f682a6a9383e9a" Mar 08 03:28:06.938031 master-0 kubenswrapper[7387]: I0308 03:28:06.937984 7387 scope.go:117] "RemoveContainer" containerID="1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530" Mar 08 03:28:06.938249 master-0 kubenswrapper[7387]: E0308 03:28:06.938196 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"package-server-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=package-server-manager 
pod=package-server-manager-854648ff6d-8qznw_openshift-operator-lifecycle-manager(f8711b9f-3d18-4b8d-a263-2c9af9dc68a6)\"" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" podUID="f8711b9f-3d18-4b8d-a263-2c9af9dc68a6" Mar 08 03:28:07.599291 master-0 kubenswrapper[7387]: I0308 03:28:07.599211 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:07.599291 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:07.599291 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:07.599291 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:07.599726 master-0 kubenswrapper[7387]: I0308 03:28:07.599310 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:07.953212 master-0 kubenswrapper[7387]: I0308 03:28:07.952978 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler/0.log" Mar 08 03:28:07.955067 master-0 kubenswrapper[7387]: I0308 03:28:07.954123 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"368b8ba6f29a3d4eee5529b66bef9da59c3b79e9dda846465669b30ae0b1c3ef"} Mar 08 03:28:07.955067 master-0 kubenswrapper[7387]: I0308 03:28:07.954516 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 08 03:28:08.599163 master-0 kubenswrapper[7387]: 
I0308 03:28:08.599086 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:08.599163 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:08.599163 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:08.599163 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:08.599575 master-0 kubenswrapper[7387]: I0308 03:28:08.599181 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:08.760637 master-0 kubenswrapper[7387]: I0308 03:28:08.760541 7387 scope.go:117] "RemoveContainer" containerID="21c52df530390b93390189f772de90cc9461c78b27a38ea7bd0553d5255d9c65" Mar 08 03:28:08.760998 master-0 kubenswrapper[7387]: E0308 03:28:08.760946 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 08 03:28:09.599612 master-0 kubenswrapper[7387]: I0308 03:28:09.599515 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:09.599612 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:09.599612 master-0 kubenswrapper[7387]: 
[+]process-running ok Mar 08 03:28:09.599612 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:09.600657 master-0 kubenswrapper[7387]: I0308 03:28:09.599614 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:10.600618 master-0 kubenswrapper[7387]: I0308 03:28:10.600538 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:10.600618 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:10.600618 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:10.600618 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:10.601302 master-0 kubenswrapper[7387]: I0308 03:28:10.600639 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:11.599927 master-0 kubenswrapper[7387]: I0308 03:28:11.599823 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:11.599927 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:11.599927 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:11.599927 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:11.600468 master-0 kubenswrapper[7387]: I0308 03:28:11.600006 7387 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:12.600057 master-0 kubenswrapper[7387]: I0308 03:28:12.599939 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:12.600057 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:12.600057 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:12.600057 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:12.600057 master-0 kubenswrapper[7387]: I0308 03:28:12.600031 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:12.720507 master-0 kubenswrapper[7387]: E0308 03:28:12.720397 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:28:13.312791 master-0 kubenswrapper[7387]: E0308 03:28:13.312402 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 08 03:28:13.599800 master-0 kubenswrapper[7387]: I0308 03:28:13.599555 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:13.599800 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:13.599800 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:13.599800 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:13.599800 master-0 kubenswrapper[7387]: I0308 03:28:13.599654 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:13.760284 master-0 kubenswrapper[7387]: I0308 03:28:13.760191 7387 scope.go:117] "RemoveContainer" containerID="bf37b1d91d00a86f3f5cd7c16070ea9424642dc69ceced0dc38c369c92d986f3" Mar 08 03:28:13.760674 master-0 kubenswrapper[7387]: E0308 03:28:13.760603 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-kfmd9_openshift-cluster-storage-operator(9fb588a9-6240-4513-8e4b-248eb43d3f06)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" podUID="9fb588a9-6240-4513-8e4b-248eb43d3f06" Mar 08 03:28:14.600040 master-0 kubenswrapper[7387]: I0308 03:28:14.599946 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:14.600040 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:14.600040 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:14.600040 master-0 kubenswrapper[7387]: healthz 
check failed Mar 08 03:28:14.600040 master-0 kubenswrapper[7387]: I0308 03:28:14.600039 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:14.951978 master-0 kubenswrapper[7387]: I0308 03:28:14.951755 7387 patch_prober.go:28] interesting pod/controller-manager-75cd54f7f-2bg6l container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.75:8443/healthz\": dial tcp 10.128.0.75:8443: connect: connection refused" start-of-body= Mar 08 03:28:14.951978 master-0 kubenswrapper[7387]: I0308 03:28:14.951852 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" podUID="bd53c98b-51cc-498a-ab37-f743a27bdcfb" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.75:8443/healthz\": dial tcp 10.128.0.75:8443: connect: connection refused" Mar 08 03:28:14.951978 master-0 kubenswrapper[7387]: I0308 03:28:14.951754 7387 patch_prober.go:28] interesting pod/controller-manager-75cd54f7f-2bg6l container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.75:8443/healthz\": dial tcp 10.128.0.75:8443: connect: connection refused" start-of-body= Mar 08 03:28:14.952406 master-0 kubenswrapper[7387]: I0308 03:28:14.952041 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" podUID="bd53c98b-51cc-498a-ab37-f743a27bdcfb" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.75:8443/healthz\": dial tcp 10.128.0.75:8443: connect: connection refused" Mar 08 03:28:15.008948 master-0 kubenswrapper[7387]: I0308 03:28:15.008839 7387 generic.go:334] 
"Generic (PLEG): container finished" podID="bd53c98b-51cc-498a-ab37-f743a27bdcfb" containerID="52064f3ac14760c9cb11d88b1c7b45aa27ea0169c06575ac706e2935eceff0c6" exitCode=0 Mar 08 03:28:15.008948 master-0 kubenswrapper[7387]: I0308 03:28:15.008949 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" event={"ID":"bd53c98b-51cc-498a-ab37-f743a27bdcfb","Type":"ContainerDied","Data":"52064f3ac14760c9cb11d88b1c7b45aa27ea0169c06575ac706e2935eceff0c6"} Mar 08 03:28:15.009827 master-0 kubenswrapper[7387]: I0308 03:28:15.009757 7387 scope.go:117] "RemoveContainer" containerID="52064f3ac14760c9cb11d88b1c7b45aa27ea0169c06575ac706e2935eceff0c6" Mar 08 03:28:15.599975 master-0 kubenswrapper[7387]: I0308 03:28:15.599872 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:15.599975 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:15.599975 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:15.599975 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:15.600393 master-0 kubenswrapper[7387]: I0308 03:28:15.599982 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:16.018588 master-0 kubenswrapper[7387]: I0308 03:28:16.018509 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" event={"ID":"bd53c98b-51cc-498a-ab37-f743a27bdcfb","Type":"ContainerStarted","Data":"0c081c5b9012641af62a91061e5a811ea320420fd3f4b7f190d5a657a4671603"} Mar 08 03:28:16.019024 master-0 
kubenswrapper[7387]: I0308 03:28:16.018950 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:28:16.030517 master-0 kubenswrapper[7387]: I0308 03:28:16.030433 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:28:16.600245 master-0 kubenswrapper[7387]: I0308 03:28:16.600141 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:16.600245 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:16.600245 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:16.600245 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:16.600245 master-0 kubenswrapper[7387]: I0308 03:28:16.600242 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:17.599371 master-0 kubenswrapper[7387]: I0308 03:28:17.599274 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:17.599371 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:17.599371 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:17.599371 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:17.599371 master-0 kubenswrapper[7387]: I0308 03:28:17.599357 7387 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:17.760857 master-0 kubenswrapper[7387]: I0308 03:28:17.760734 7387 scope.go:117] "RemoveContainer" containerID="c86422caffa4210f8d2d79226aa71c0eb21bf5b4345acfa110f682a6a9383e9a" Mar 08 03:28:18.036383 master-0 kubenswrapper[7387]: I0308 03:28:18.036289 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-8qznw_f8711b9f-3d18-4b8d-a263-2c9af9dc68a6/package-server-manager/1.log" Mar 08 03:28:18.036871 master-0 kubenswrapper[7387]: I0308 03:28:18.036810 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" event={"ID":"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6","Type":"ContainerStarted","Data":"ab5a7437243a4ead6ec04ee8852468ab2a86f61b936bcca20b78eb0efe80898d"} Mar 08 03:28:18.037974 master-0 kubenswrapper[7387]: I0308 03:28:18.037887 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" Mar 08 03:28:18.599806 master-0 kubenswrapper[7387]: I0308 03:28:18.599728 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:18.599806 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:18.599806 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:18.599806 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:18.600470 master-0 kubenswrapper[7387]: I0308 03:28:18.599821 7387 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:19.600601 master-0 kubenswrapper[7387]: I0308 03:28:19.600369 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:19.600601 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:19.600601 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:19.600601 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:19.601622 master-0 kubenswrapper[7387]: I0308 03:28:19.600660 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:19.759942 master-0 kubenswrapper[7387]: I0308 03:28:19.759832 7387 scope.go:117] "RemoveContainer" containerID="21c52df530390b93390189f772de90cc9461c78b27a38ea7bd0553d5255d9c65" Mar 08 03:28:19.760239 master-0 kubenswrapper[7387]: E0308 03:28:19.760172 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 08 03:28:20.599527 master-0 kubenswrapper[7387]: I0308 03:28:20.599368 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:20.599527 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:20.599527 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:20.599527 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:20.600108 master-0 kubenswrapper[7387]: I0308 03:28:20.599528 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:21.599637 master-0 kubenswrapper[7387]: I0308 03:28:21.599552 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:21.599637 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:21.599637 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:21.599637 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:21.600619 master-0 kubenswrapper[7387]: I0308 03:28:21.599655 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:22.722226 master-0 kubenswrapper[7387]: E0308 03:28:22.722152 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": the server was unable to return a response in the time allotted, but may still be processing the request (get nodes master-0)" Mar 08 03:28:23.009741 master-0 kubenswrapper[7387]: I0308 03:28:23.009630 7387 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:23.009741 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:23.009741 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:23.009741 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:23.010455 master-0 kubenswrapper[7387]: I0308 03:28:23.009808 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:23.600745 master-0 kubenswrapper[7387]: I0308 03:28:23.600601 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:23.600745 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:23.600745 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:23.600745 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:23.601322 master-0 kubenswrapper[7387]: I0308 03:28:23.600844 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:24.600247 master-0 kubenswrapper[7387]: I0308 03:28:24.600053 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 
03:28:24.600247 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:24.600247 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:24.600247 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:24.600247 master-0 kubenswrapper[7387]: I0308 03:28:24.600154 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:24.760081 master-0 kubenswrapper[7387]: I0308 03:28:24.760006 7387 scope.go:117] "RemoveContainer" containerID="bf37b1d91d00a86f3f5cd7c16070ea9424642dc69ceced0dc38c369c92d986f3" Mar 08 03:28:24.760450 master-0 kubenswrapper[7387]: E0308 03:28:24.760390 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-kfmd9_openshift-cluster-storage-operator(9fb588a9-6240-4513-8e4b-248eb43d3f06)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" podUID="9fb588a9-6240-4513-8e4b-248eb43d3f06" Mar 08 03:28:25.599976 master-0 kubenswrapper[7387]: I0308 03:28:25.599867 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:25.599976 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:25.599976 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:25.599976 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:25.600319 master-0 kubenswrapper[7387]: I0308 03:28:25.599994 7387 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:26.598714 master-0 kubenswrapper[7387]: I0308 03:28:26.598596 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:26.598714 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:26.598714 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:26.598714 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:26.599308 master-0 kubenswrapper[7387]: I0308 03:28:26.598739 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:27.600005 master-0 kubenswrapper[7387]: I0308 03:28:27.599883 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:27.600005 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:27.600005 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:27.600005 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:27.601183 master-0 kubenswrapper[7387]: I0308 03:28:27.600036 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:28.601000 
master-0 kubenswrapper[7387]: I0308 03:28:28.600893 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:28.601000 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:28.601000 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:28.601000 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:28.601628 master-0 kubenswrapper[7387]: I0308 03:28:28.601022 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:29.600056 master-0 kubenswrapper[7387]: I0308 03:28:29.599979 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:29.600056 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:29.600056 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:29.600056 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:29.601224 master-0 kubenswrapper[7387]: I0308 03:28:29.600135 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:30.313972 master-0 kubenswrapper[7387]: E0308 03:28:30.313538 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 08 03:28:30.600169 master-0 kubenswrapper[7387]: I0308 03:28:30.600020 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:30.600169 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:30.600169 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:30.600169 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:30.600169 master-0 kubenswrapper[7387]: I0308 03:28:30.600110 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:30.760894 master-0 kubenswrapper[7387]: I0308 03:28:30.760856 7387 scope.go:117] "RemoveContainer" containerID="21c52df530390b93390189f772de90cc9461c78b27a38ea7bd0553d5255d9c65" Mar 08 03:28:31.151542 master-0 kubenswrapper[7387]: I0308 03:28:31.151476 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"f6002d889d471a68aa7f22937f49a82a2b4b24ab138311c194765fb16289177d"} Mar 08 03:28:31.599152 master-0 kubenswrapper[7387]: I0308 03:28:31.599088 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:31.599152 master-0 
kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:31.599152 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:31.599152 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:31.599424 master-0 kubenswrapper[7387]: I0308 03:28:31.599168 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:32.598613 master-0 kubenswrapper[7387]: I0308 03:28:32.598560 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:32.598613 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:32.598613 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:32.598613 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:32.599131 master-0 kubenswrapper[7387]: I0308 03:28:32.598625 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:32.723164 master-0 kubenswrapper[7387]: E0308 03:28:32.723078 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:28:32.723164 master-0 kubenswrapper[7387]: E0308 03:28:32.723129 7387 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 08 03:28:33.320133 master-0 
kubenswrapper[7387]: I0308 03:28:33.320046 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:28:33.600062 master-0 kubenswrapper[7387]: I0308 03:28:33.599742 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:33.600062 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:33.600062 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:33.600062 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:33.600062 master-0 kubenswrapper[7387]: I0308 03:28:33.599831 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:33.738729 master-0 kubenswrapper[7387]: I0308 03:28:33.738622 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:28:34.598980 master-0 kubenswrapper[7387]: I0308 03:28:34.598878 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:34.598980 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:34.598980 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:34.598980 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:34.599344 master-0 kubenswrapper[7387]: I0308 03:28:34.598985 7387 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:35.475226 master-0 kubenswrapper[7387]: I0308 03:28:35.475136 7387 status_manager.go:851] "Failed to get status for pod" podUID="6a7152f2-d51f-4e15-8e0a-92278cbecd53" pod="openshift-kube-controller-manager/installer-2-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)" Mar 08 03:28:35.600191 master-0 kubenswrapper[7387]: I0308 03:28:35.600104 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:35.600191 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:35.600191 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:35.600191 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:28:35.600812 master-0 kubenswrapper[7387]: I0308 03:28:35.600208 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:28:36.599884 master-0 kubenswrapper[7387]: I0308 03:28:36.599829 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:28:36.599884 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:28:36.599884 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:28:36.599884 master-0 
kubenswrapper[7387]: healthz check failed
Mar 08 03:28:36.600984 master-0 kubenswrapper[7387]: I0308 03:28:36.600936 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:28:36.739261 master-0 kubenswrapper[7387]: I0308 03:28:36.739176 7387 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:28:37.599334 master-0 kubenswrapper[7387]: I0308 03:28:37.599249 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:28:37.599334 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:28:37.599334 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:28:37.599334 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:28:37.599639 master-0 kubenswrapper[7387]: I0308 03:28:37.599340 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:28:38.599593 master-0 kubenswrapper[7387]: I0308 03:28:38.599484 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:28:38.599593 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:28:38.599593 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:28:38.599593 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:28:38.600747 master-0 kubenswrapper[7387]: I0308 03:28:38.599600 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:28:38.759956 master-0 kubenswrapper[7387]: I0308 03:28:38.759815 7387 scope.go:117] "RemoveContainer" containerID="bf37b1d91d00a86f3f5cd7c16070ea9424642dc69ceced0dc38c369c92d986f3"
Mar 08 03:28:38.760625 master-0 kubenswrapper[7387]: E0308 03:28:38.760201 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-kfmd9_openshift-cluster-storage-operator(9fb588a9-6240-4513-8e4b-248eb43d3f06)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" podUID="9fb588a9-6240-4513-8e4b-248eb43d3f06"
Mar 08 03:28:38.905963 master-0 kubenswrapper[7387]: E0308 03:28:38.905711 7387 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 08 03:28:39.602563 master-0 kubenswrapper[7387]: I0308 03:28:39.601279 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:28:39.602563 master-0 kubenswrapper[7387]: [-]has-synced failed:
reason withheld
Mar 08 03:28:39.602563 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:28:39.602563 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:28:39.602563 master-0 kubenswrapper[7387]: I0308 03:28:39.601364 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:28:39.637745 master-0 kubenswrapper[7387]: E0308 03:28:39.637633 7387 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{network-node-identity-ppdzb.189abf30d742727c openshift-network-node-identity 8428 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-network-node-identity,Name:network-node-identity-ppdzb,UID:4fd323ae-11bf-4207-bdce-4d51a9c19dc3,APIVersion:v1,ResourceVersion:3401,FieldPath:spec.containers{approver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:13:39 +0000 UTC,LastTimestamp:2026-03-08 03:26:20.928253349 +0000 UTC m=+917.322729030,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 08 03:28:40.235846 master-0 kubenswrapper[7387]: I0308 03:28:40.235720 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"ca38d7ba924ac97567c848c4de9b85cf952ac808362ef46dc74a8e038161b464"}
Mar 08 03:28:40.235846 master-0 kubenswrapper[7387]: I0308 03:28:40.235799 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"eda1f9d06b58215a69c700807746c7a2bb59d9d2efe4a26dddc2ef461fe516fc"}
Mar 08 03:28:40.235846 master-0 kubenswrapper[7387]: I0308 03:28:40.235827 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"bbc358fa2def0911cc6a3fbdff1eaadd0b9f4c2ad7276bfbd2fbe9219f40e336"}
Mar 08 03:28:40.598872 master-0 kubenswrapper[7387]: I0308 03:28:40.598803 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:28:40.598872 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:28:40.598872 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:28:40.598872 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:28:40.599539 master-0 kubenswrapper[7387]: I0308 03:28:40.598878 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:28:41.251063 master-0 kubenswrapper[7387]: I0308 03:28:41.250886 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"d6afb7859936c1ddfbc758d407202a95a5bbef900466cee55affce196b98b8b5"}
Mar 08 03:28:41.251063 master-0 kubenswrapper[7387]: I0308 03:28:41.250963 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0"
event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"d98f5fff29d6ff6e9274b1d7396d5c8c1488275b7a2421d6c1826cd6d6a98019"}
Mar 08 03:28:41.251859 master-0 kubenswrapper[7387]: I0308 03:28:41.251486 7387 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="f56d0a85-fd9c-4bfb-9912-27fbe89b0adb"
Mar 08 03:28:41.251859 master-0 kubenswrapper[7387]: I0308 03:28:41.251553 7387 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="f56d0a85-fd9c-4bfb-9912-27fbe89b0adb"
Mar 08 03:28:41.599283 master-0 kubenswrapper[7387]: I0308 03:28:41.599157 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:28:41.599283 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:28:41.599283 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:28:41.599283 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:28:41.599880 master-0 kubenswrapper[7387]: I0308 03:28:41.599307 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:28:42.599596 master-0 kubenswrapper[7387]: I0308 03:28:42.599477 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:28:42.599596 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:28:42.599596 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:28:42.599596 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:28:42.599596 master-0 kubenswrapper[7387]: I0308 03:28:42.599572 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:28:42.790471 master-0 kubenswrapper[7387]: I0308 03:28:42.790370 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Mar 08 03:28:42.790471 master-0 kubenswrapper[7387]: I0308 03:28:42.790458 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Mar 08 03:28:43.273175 master-0 kubenswrapper[7387]: I0308 03:28:43.273107 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4bpl8_197afe92-5912-4e90-a477-e3abe001bbc7/ingress-operator/4.log"
Mar 08 03:28:43.273965 master-0 kubenswrapper[7387]: I0308 03:28:43.273932 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4bpl8_197afe92-5912-4e90-a477-e3abe001bbc7/ingress-operator/3.log"
Mar 08 03:28:43.274686 master-0 kubenswrapper[7387]: I0308 03:28:43.274632 7387 generic.go:334] "Generic (PLEG): container finished" podID="197afe92-5912-4e90-a477-e3abe001bbc7" containerID="05444228b07f2531dcfc116cb1ae698869c129d21221e77d1dfea4921d9d08c4" exitCode=1
Mar 08 03:28:43.274815 master-0 kubenswrapper[7387]: I0308 03:28:43.274684 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" event={"ID":"197afe92-5912-4e90-a477-e3abe001bbc7","Type":"ContainerDied","Data":"05444228b07f2531dcfc116cb1ae698869c129d21221e77d1dfea4921d9d08c4"}
Mar 08 03:28:43.274815 master-0 kubenswrapper[7387]: I0308 03:28:43.274733 7387 scope.go:117] "RemoveContainer"
containerID="3a03f9a9aafa4fbc2ea827886673fad2a6a9650b76a61f6d3b1c9550a51441f3"
Mar 08 03:28:43.279951 master-0 kubenswrapper[7387]: I0308 03:28:43.279857 7387 scope.go:117] "RemoveContainer" containerID="05444228b07f2531dcfc116cb1ae698869c129d21221e77d1dfea4921d9d08c4"
Mar 08 03:28:43.282812 master-0 kubenswrapper[7387]: E0308 03:28:43.280874 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-4bpl8_openshift-ingress-operator(197afe92-5912-4e90-a477-e3abe001bbc7)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" podUID="197afe92-5912-4e90-a477-e3abe001bbc7"
Mar 08 03:28:43.600048 master-0 kubenswrapper[7387]: I0308 03:28:43.599960 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:28:43.600048 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:28:43.600048 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:28:43.600048 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:28:43.600048 master-0 kubenswrapper[7387]: I0308 03:28:43.600049 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:28:44.289221 master-0 kubenswrapper[7387]: I0308 03:28:44.288089 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4bpl8_197afe92-5912-4e90-a477-e3abe001bbc7/ingress-operator/4.log"
Mar 08 03:28:44.600472 master-0 kubenswrapper[7387]: I0308 03:28:44.600318 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:28:44.600472 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:28:44.600472 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:28:44.600472 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:28:44.600472 master-0 kubenswrapper[7387]: I0308 03:28:44.600408 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:28:45.599023 master-0 kubenswrapper[7387]: I0308 03:28:45.598953 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:28:45.599023 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:28:45.599023 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:28:45.599023 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:28:45.599023 master-0 kubenswrapper[7387]: I0308 03:28:45.599015 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:28:46.599776 master-0 kubenswrapper[7387]: I0308 03:28:46.599680 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500"
start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:28:46.599776 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:28:46.599776 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:28:46.599776 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:28:46.600769 master-0 kubenswrapper[7387]: I0308 03:28:46.599781 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:28:46.739365 master-0 kubenswrapper[7387]: I0308 03:28:46.739246 7387 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:28:47.315602 master-0 kubenswrapper[7387]: E0308 03:28:47.315521 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io master-0)" interval="7s"
Mar 08 03:28:47.599488 master-0 kubenswrapper[7387]: I0308 03:28:47.599314 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:28:47.599488 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:28:47.599488 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:28:47.599488 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:28:47.599488 master-0 kubenswrapper[7387]: I0308 03:28:47.599395 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:28:48.599046 master-0 kubenswrapper[7387]: I0308 03:28:48.598980 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:28:48.599046 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:28:48.599046 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:28:48.599046 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:28:48.599496 master-0 kubenswrapper[7387]: I0308 03:28:48.599057 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:28:49.599546 master-0 kubenswrapper[7387]: I0308 03:28:49.599470 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:28:49.599546 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:28:49.599546 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:28:49.599546 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:28:49.600840 master-0 kubenswrapper[7387]: I0308 03:28:49.599547 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d"
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:28:50.599282 master-0 kubenswrapper[7387]: I0308 03:28:50.599211 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:28:50.599282 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:28:50.599282 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:28:50.599282 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:28:50.599597 master-0 kubenswrapper[7387]: I0308 03:28:50.599326 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:28:51.599525 master-0 kubenswrapper[7387]: I0308 03:28:51.599441 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:28:51.599525 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:28:51.599525 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:28:51.599525 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:28:51.600249 master-0 kubenswrapper[7387]: I0308 03:28:51.599540 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:28:52.600069 master-0 kubenswrapper[7387]: I0308 03:28:52.599965 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:28:52.600069 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:28:52.600069 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:28:52.600069 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:28:52.600069 master-0 kubenswrapper[7387]: I0308 03:28:52.600060 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:28:52.824667 master-0 kubenswrapper[7387]: I0308 03:28:52.824568 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Mar 08 03:28:53.025138 master-0 kubenswrapper[7387]: E0308 03:28:53.025009 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:28:43Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:28:43Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:28:43Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:28:43Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:28:53.600430 master-0 kubenswrapper[7387]: I0308 03:28:53.600358 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:28:53.600430 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:28:53.600430 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:28:53.600430 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:28:53.601169 master-0 kubenswrapper[7387]: I0308 03:28:53.600456 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:28:53.759644 master-0 kubenswrapper[7387]: I0308 03:28:53.759595 7387 scope.go:117] "RemoveContainer" containerID="bf37b1d91d00a86f3f5cd7c16070ea9424642dc69ceced0dc38c369c92d986f3"
Mar 08 03:28:54.374954 master-0 kubenswrapper[7387]: I0308 03:28:54.374827 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/4.log"
Mar 08 03:28:54.374954 master-0 kubenswrapper[7387]: I0308 03:28:54.374888 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" event={"ID":"9fb588a9-6240-4513-8e4b-248eb43d3f06","Type":"ContainerStarted","Data":"b291f8e827490042eb3fb88b716e290e8802aa029f1abc52b08ef049c1f2620a"}
Mar 08 03:28:54.600037 master-0 kubenswrapper[7387]: I0308 03:28:54.599782 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:28:54.600037 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:28:54.600037 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:28:54.600037 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:28:54.600037 master-0 kubenswrapper[7387]: I0308 03:28:54.599867 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:28:55.599857 master-0 kubenswrapper[7387]: I0308 03:28:55.599788 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:28:55.599857 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:28:55.599857 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:28:55.599857 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:28:55.600495 master-0 kubenswrapper[7387]: I0308 03:28:55.599865 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:28:56.600202 master-0 kubenswrapper[7387]: I0308 03:28:56.600077 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:28:56.600202 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:28:56.600202 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:28:56.600202 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:28:56.601410 master-0 kubenswrapper[7387]: I0308 03:28:56.600230 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:28:56.601410 master-0 kubenswrapper[7387]: I0308 03:28:56.600344 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9"
Mar 08 03:28:56.601707 master-0 kubenswrapper[7387]: I0308 03:28:56.601628 7387 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"1563150ee15a63a338caec1763c5794e6b7326c0a3188de3870365353993b8e5"} pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" containerMessage="Container router failed startup probe, will be restarted"
Mar 08 03:28:56.601840 master-0 kubenswrapper[7387]: I0308 03:28:56.601718 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" containerID="cri-o://1563150ee15a63a338caec1763c5794e6b7326c0a3188de3870365353993b8e5" gracePeriod=3600
Mar 08 03:28:56.740089 master-0 kubenswrapper[7387]: I0308 03:28:56.739964 7387 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:28:56.740425 master-0 kubenswrapper[7387]: I0308 03:28:56.740111 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:28:56.741395 master-0 kubenswrapper[7387]: I0308 03:28:56.741332 7387 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"f6002d889d471a68aa7f22937f49a82a2b4b24ab138311c194765fb16289177d"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Mar 08 03:28:56.741539 master-0 kubenswrapper[7387]: I0308 03:28:56.741455 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
containerName="kube-controller-manager" containerID="cri-o://f6002d889d471a68aa7f22937f49a82a2b4b24ab138311c194765fb16289177d" gracePeriod=30
Mar 08 03:28:56.760820 master-0 kubenswrapper[7387]: I0308 03:28:56.760721 7387 scope.go:117] "RemoveContainer" containerID="05444228b07f2531dcfc116cb1ae698869c129d21221e77d1dfea4921d9d08c4"
Mar 08 03:28:56.761677 master-0 kubenswrapper[7387]: E0308 03:28:56.761569 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-4bpl8_openshift-ingress-operator(197afe92-5912-4e90-a477-e3abe001bbc7)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" podUID="197afe92-5912-4e90-a477-e3abe001bbc7"
Mar 08 03:28:56.846124 master-0 kubenswrapper[7387]: I0308 03:28:56.845979 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw"
Mar 08 03:28:56.865249 master-0 kubenswrapper[7387]: E0308 03:28:56.865170 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 08 03:28:57.406053 master-0 kubenswrapper[7387]: I0308 03:28:57.405992 7387 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="f6002d889d471a68aa7f22937f49a82a2b4b24ab138311c194765fb16289177d" exitCode=2
Mar 08 03:28:57.406325 master-0 kubenswrapper[7387]: I0308 03:28:57.406083 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"f6002d889d471a68aa7f22937f49a82a2b4b24ab138311c194765fb16289177d"}
Mar 08 03:28:57.406325 master-0 kubenswrapper[7387]: I0308 03:28:57.406259 7387 scope.go:117] "RemoveContainer" containerID="21c52df530390b93390189f772de90cc9461c78b27a38ea7bd0553d5255d9c65"
Mar 08 03:28:57.407185 master-0 kubenswrapper[7387]: I0308 03:28:57.407129 7387 scope.go:117] "RemoveContainer" containerID="f6002d889d471a68aa7f22937f49a82a2b4b24ab138311c194765fb16289177d"
Mar 08 03:28:57.407620 master-0 kubenswrapper[7387]: E0308 03:28:57.407575 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 08 03:28:57.815108 master-0 kubenswrapper[7387]: I0308 03:28:57.815050 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Mar 08 03:28:57.873140 master-0 kubenswrapper[7387]: I0308 03:28:57.873081 7387 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 03:28:57.873486 master-0 kubenswrapper[7387]: I0308 03:28:57.873440 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler" probeResult="failure" output="Get
\"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 03:28:57.873704 master-0 kubenswrapper[7387]: I0308 03:28:57.873451 7387 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 03:28:57.873804 master-0 kubenswrapper[7387]: I0308 03:28:57.873762 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 03:29:01.570114 master-0 kubenswrapper[7387]: I0308 03:29:01.570063 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 08 03:29:01.572221 master-0 kubenswrapper[7387]: I0308 03:29:01.570539 7387 scope.go:117] "RemoveContainer" containerID="f6002d889d471a68aa7f22937f49a82a2b4b24ab138311c194765fb16289177d" Mar 08 03:29:01.572221 master-0 kubenswrapper[7387]: E0308 03:29:01.570742 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 08 03:29:03.026195 master-0 kubenswrapper[7387]: E0308 03:29:03.026044 7387 kubelet_node_status.go:585] "Error updating node status, will retry" 
err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:29:04.317110 master-0 kubenswrapper[7387]: E0308 03:29:04.317034 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s" Mar 08 03:29:04.477989 master-0 kubenswrapper[7387]: I0308 03:29:04.477819 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-qgg4b_45212ce7-5f95-402e-93c4-83bac844f77d/cluster-baremetal-operator/1.log" Mar 08 03:29:04.479319 master-0 kubenswrapper[7387]: I0308 03:29:04.479272 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-qgg4b_45212ce7-5f95-402e-93c4-83bac844f77d/cluster-baremetal-operator/0.log" Mar 08 03:29:04.479433 master-0 kubenswrapper[7387]: I0308 03:29:04.479349 7387 generic.go:334] "Generic (PLEG): container finished" podID="45212ce7-5f95-402e-93c4-83bac844f77d" containerID="1f6f8381deef57a0256fc235c898d15d43f11f73c31fe5017234823e9524bbb3" exitCode=1 Mar 08 03:29:04.479433 master-0 kubenswrapper[7387]: I0308 03:29:04.479391 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" event={"ID":"45212ce7-5f95-402e-93c4-83bac844f77d","Type":"ContainerDied","Data":"1f6f8381deef57a0256fc235c898d15d43f11f73c31fe5017234823e9524bbb3"} Mar 08 03:29:04.479559 master-0 kubenswrapper[7387]: I0308 03:29:04.479440 7387 scope.go:117] "RemoveContainer" containerID="1bc524d4935db97fb50be5674147f8f9cecf357fca9acfe424caa68101eaec3d" Mar 08 03:29:04.480226 master-0 kubenswrapper[7387]: I0308 03:29:04.480180 7387 scope.go:117] "RemoveContainer" 
containerID="1f6f8381deef57a0256fc235c898d15d43f11f73c31fe5017234823e9524bbb3" Mar 08 03:29:04.480627 master-0 kubenswrapper[7387]: E0308 03:29:04.480585 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-5cdb4c5598-qgg4b_openshift-machine-api(45212ce7-5f95-402e-93c4-83bac844f77d)\"" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" podUID="45212ce7-5f95-402e-93c4-83bac844f77d" Mar 08 03:29:05.488945 master-0 kubenswrapper[7387]: I0308 03:29:05.488827 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-qgg4b_45212ce7-5f95-402e-93c4-83bac844f77d/cluster-baremetal-operator/1.log" Mar 08 03:29:07.508981 master-0 kubenswrapper[7387]: I0308 03:29:07.508873 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 08 03:29:09.760014 master-0 kubenswrapper[7387]: I0308 03:29:09.759875 7387 scope.go:117] "RemoveContainer" containerID="05444228b07f2531dcfc116cb1ae698869c129d21221e77d1dfea4921d9d08c4" Mar 08 03:29:09.760862 master-0 kubenswrapper[7387]: E0308 03:29:09.760266 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-4bpl8_openshift-ingress-operator(197afe92-5912-4e90-a477-e3abe001bbc7)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" podUID="197afe92-5912-4e90-a477-e3abe001bbc7" Mar 08 03:29:12.759864 master-0 kubenswrapper[7387]: I0308 03:29:12.759719 7387 scope.go:117] "RemoveContainer" containerID="f6002d889d471a68aa7f22937f49a82a2b4b24ab138311c194765fb16289177d" Mar 08 03:29:12.760942 master-0 
kubenswrapper[7387]: E0308 03:29:12.760148 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 08 03:29:13.026686 master-0 kubenswrapper[7387]: E0308 03:29:13.026540 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:29:13.641483 master-0 kubenswrapper[7387]: E0308 03:29:13.641240 7387 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{network-node-identity-ppdzb.189abf30e7f420a3 openshift-network-node-identity 8433 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-network-node-identity,Name:network-node-identity-ppdzb,UID:4fd323ae-11bf-4207-bdce-4d51a9c19dc3,APIVersion:v1,ResourceVersion:3401,FieldPath:spec.containers{approver},},Reason:Created,Message:Created container: approver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:13:40 +0000 UTC,LastTimestamp:2026-03-08 03:26:21.115325906 +0000 UTC m=+917.509801627,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:29:15.254366 master-0 kubenswrapper[7387]: E0308 03:29:15.254266 7387 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context 
deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 08 03:29:15.578347 master-0 kubenswrapper[7387]: I0308 03:29:15.578248 7387 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="f56d0a85-fd9c-4bfb-9912-27fbe89b0adb" Mar 08 03:29:15.578347 master-0 kubenswrapper[7387]: I0308 03:29:15.578297 7387 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="f56d0a85-fd9c-4bfb-9912-27fbe89b0adb" Mar 08 03:29:19.760300 master-0 kubenswrapper[7387]: I0308 03:29:19.760219 7387 scope.go:117] "RemoveContainer" containerID="1f6f8381deef57a0256fc235c898d15d43f11f73c31fe5017234823e9524bbb3" Mar 08 03:29:20.636698 master-0 kubenswrapper[7387]: I0308 03:29:20.636634 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-qgg4b_45212ce7-5f95-402e-93c4-83bac844f77d/cluster-baremetal-operator/1.log" Mar 08 03:29:20.637297 master-0 kubenswrapper[7387]: I0308 03:29:20.637247 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" event={"ID":"45212ce7-5f95-402e-93c4-83bac844f77d","Type":"ContainerStarted","Data":"e400f643f337bd93479a4bb20ce94010f27e89223dc226a371f807e3646db58e"} Mar 08 03:29:21.319777 master-0 kubenswrapper[7387]: E0308 03:29:21.319324 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s" Mar 08 03:29:23.026968 master-0 kubenswrapper[7387]: E0308 03:29:23.026896 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:29:23.760686 master-0 
kubenswrapper[7387]: I0308 03:29:23.760584 7387 scope.go:117] "RemoveContainer" containerID="05444228b07f2531dcfc116cb1ae698869c129d21221e77d1dfea4921d9d08c4" Mar 08 03:29:23.761099 master-0 kubenswrapper[7387]: E0308 03:29:23.761045 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-4bpl8_openshift-ingress-operator(197afe92-5912-4e90-a477-e3abe001bbc7)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" podUID="197afe92-5912-4e90-a477-e3abe001bbc7" Mar 08 03:29:24.127535 master-0 kubenswrapper[7387]: I0308 03:29:24.127464 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/5.log" Mar 08 03:29:24.128238 master-0 kubenswrapper[7387]: I0308 03:29:24.128198 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/4.log" Mar 08 03:29:24.128301 master-0 kubenswrapper[7387]: I0308 03:29:24.128268 7387 generic.go:334] "Generic (PLEG): container finished" podID="9fb588a9-6240-4513-8e4b-248eb43d3f06" containerID="b291f8e827490042eb3fb88b716e290e8802aa029f1abc52b08ef049c1f2620a" exitCode=1 Mar 08 03:29:24.128347 master-0 kubenswrapper[7387]: I0308 03:29:24.128311 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" event={"ID":"9fb588a9-6240-4513-8e4b-248eb43d3f06","Type":"ContainerDied","Data":"b291f8e827490042eb3fb88b716e290e8802aa029f1abc52b08ef049c1f2620a"} Mar 08 03:29:24.128390 master-0 kubenswrapper[7387]: I0308 03:29:24.128361 7387 scope.go:117] "RemoveContainer" 
containerID="bf37b1d91d00a86f3f5cd7c16070ea9424642dc69ceced0dc38c369c92d986f3" Mar 08 03:29:24.129133 master-0 kubenswrapper[7387]: I0308 03:29:24.129083 7387 scope.go:117] "RemoveContainer" containerID="b291f8e827490042eb3fb88b716e290e8802aa029f1abc52b08ef049c1f2620a" Mar 08 03:29:24.129482 master-0 kubenswrapper[7387]: E0308 03:29:24.129431 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-kfmd9_openshift-cluster-storage-operator(9fb588a9-6240-4513-8e4b-248eb43d3f06)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" podUID="9fb588a9-6240-4513-8e4b-248eb43d3f06" Mar 08 03:29:25.139730 master-0 kubenswrapper[7387]: I0308 03:29:25.139630 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/5.log" Mar 08 03:29:25.761929 master-0 kubenswrapper[7387]: I0308 03:29:25.761841 7387 scope.go:117] "RemoveContainer" containerID="f6002d889d471a68aa7f22937f49a82a2b4b24ab138311c194765fb16289177d" Mar 08 03:29:25.762481 master-0 kubenswrapper[7387]: E0308 03:29:25.762426 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 08 03:29:33.028088 master-0 kubenswrapper[7387]: E0308 03:29:33.027529 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 03:29:33.028088 master-0 kubenswrapper[7387]: E0308 03:29:33.027586 7387 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 08 03:29:35.477358 master-0 kubenswrapper[7387]: I0308 03:29:35.477213 7387 status_manager.go:851] "Failed to get status for pod" podUID="aea52bbe-5b64-45c7-8f8c-81d027f133d0" pod="openshift-kube-apiserver/installer-3-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-3-master-0)" Mar 08 03:29:35.760431 master-0 kubenswrapper[7387]: I0308 03:29:35.760341 7387 scope.go:117] "RemoveContainer" containerID="05444228b07f2531dcfc116cb1ae698869c129d21221e77d1dfea4921d9d08c4" Mar 08 03:29:35.760848 master-0 kubenswrapper[7387]: E0308 03:29:35.760795 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-4bpl8_openshift-ingress-operator(197afe92-5912-4e90-a477-e3abe001bbc7)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" podUID="197afe92-5912-4e90-a477-e3abe001bbc7" Mar 08 03:29:36.760317 master-0 kubenswrapper[7387]: I0308 03:29:36.760198 7387 scope.go:117] "RemoveContainer" containerID="b291f8e827490042eb3fb88b716e290e8802aa029f1abc52b08ef049c1f2620a" Mar 08 03:29:36.761109 master-0 kubenswrapper[7387]: E0308 03:29:36.760559 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-kfmd9_openshift-cluster-storage-operator(9fb588a9-6240-4513-8e4b-248eb43d3f06)\"" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" podUID="9fb588a9-6240-4513-8e4b-248eb43d3f06" Mar 08 03:29:37.761165 master-0 kubenswrapper[7387]: I0308 03:29:37.761037 7387 scope.go:117] "RemoveContainer" containerID="f6002d889d471a68aa7f22937f49a82a2b4b24ab138311c194765fb16289177d" Mar 08 03:29:37.762037 master-0 kubenswrapper[7387]: E0308 03:29:37.761472 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 08 03:29:38.320743 master-0 kubenswrapper[7387]: E0308 03:29:38.320675 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 08 03:29:43.303694 master-0 kubenswrapper[7387]: I0308 03:29:43.303571 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" event={"ID":"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d","Type":"ContainerDied","Data":"1563150ee15a63a338caec1763c5794e6b7326c0a3188de3870365353993b8e5"} Mar 08 03:29:43.304653 master-0 kubenswrapper[7387]: I0308 03:29:43.303726 7387 scope.go:117] "RemoveContainer" containerID="7fa04e21a63adad667dc50ba88735d25193a1b6333668c5723070e6f990fccc3" Mar 08 03:29:43.304653 master-0 kubenswrapper[7387]: I0308 03:29:43.303339 7387 generic.go:334] "Generic (PLEG): container finished" podID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerID="1563150ee15a63a338caec1763c5794e6b7326c0a3188de3870365353993b8e5" exitCode=0 Mar 08 03:29:43.304653 
master-0 kubenswrapper[7387]: I0308 03:29:43.304358 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" event={"ID":"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d","Type":"ContainerStarted","Data":"f21c3c523851d31657765a35ad251fd865e14413aceaa830e1b8c26359e06ed6"} Mar 08 03:29:43.597553 master-0 kubenswrapper[7387]: I0308 03:29:43.597365 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:29:43.601080 master-0 kubenswrapper[7387]: I0308 03:29:43.601003 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:29:43.601080 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:29:43.601080 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:29:43.601080 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:29:43.601407 master-0 kubenswrapper[7387]: I0308 03:29:43.601100 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:29:44.600267 master-0 kubenswrapper[7387]: I0308 03:29:44.600188 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:29:44.600267 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:29:44.600267 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:29:44.600267 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:29:44.601500 
master-0 kubenswrapper[7387]: I0308 03:29:44.600284 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:29:45.600162 master-0 kubenswrapper[7387]: I0308 03:29:45.600074 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:29:45.600162 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:29:45.600162 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:29:45.600162 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:29:45.601249 master-0 kubenswrapper[7387]: I0308 03:29:45.600168 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:29:46.599630 master-0 kubenswrapper[7387]: I0308 03:29:46.599571 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:29:46.599630 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:29:46.599630 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:29:46.599630 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:29:46.599630 master-0 kubenswrapper[7387]: I0308 03:29:46.599632 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:29:47.599784 master-0 kubenswrapper[7387]: I0308 03:29:47.599671 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:29:47.599784 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:29:47.599784 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:29:47.599784 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:29:47.601121 master-0 kubenswrapper[7387]: I0308 03:29:47.599839 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:29:47.645714 master-0 kubenswrapper[7387]: E0308 03:29:47.645490 7387 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{network-node-identity-ppdzb.189abf30e9586f0c openshift-network-node-identity 8437 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-network-node-identity,Name:network-node-identity-ppdzb,UID:4fd323ae-11bf-4207-bdce-4d51a9c19dc3,APIVersion:v1,ResourceVersion:3401,FieldPath:spec.containers{approver},},Reason:Started,Message:Started container approver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:13:40 +0000 UTC,LastTimestamp:2026-03-08 03:26:21.130438603 +0000 UTC m=+917.524914324,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:29:48.596631 
master-0 kubenswrapper[7387]: I0308 03:29:48.596531 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:29:48.599889 master-0 kubenswrapper[7387]: I0308 03:29:48.599828 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:29:48.599889 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:29:48.599889 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:29:48.599889 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:29:48.600686 master-0 kubenswrapper[7387]: I0308 03:29:48.599951 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:29:48.761012 master-0 kubenswrapper[7387]: I0308 03:29:48.760877 7387 scope.go:117] "RemoveContainer" containerID="f6002d889d471a68aa7f22937f49a82a2b4b24ab138311c194765fb16289177d" Mar 08 03:29:48.761381 master-0 kubenswrapper[7387]: E0308 03:29:48.761327 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 08 03:29:49.580811 master-0 kubenswrapper[7387]: E0308 03:29:49.580741 7387 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" 
pod="openshift-etcd/etcd-master-0" Mar 08 03:29:49.599656 master-0 kubenswrapper[7387]: I0308 03:29:49.599550 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:29:49.599656 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:29:49.599656 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:29:49.599656 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:29:49.599656 master-0 kubenswrapper[7387]: I0308 03:29:49.599632 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:29:50.599704 master-0 kubenswrapper[7387]: I0308 03:29:50.599624 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:29:50.599704 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:29:50.599704 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:29:50.599704 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:29:50.600799 master-0 kubenswrapper[7387]: I0308 03:29:50.599721 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:29:50.761419 master-0 kubenswrapper[7387]: I0308 03:29:50.761352 7387 scope.go:117] "RemoveContainer" 
containerID="05444228b07f2531dcfc116cb1ae698869c129d21221e77d1dfea4921d9d08c4"
Mar 08 03:29:50.761721 master-0 kubenswrapper[7387]: I0308 03:29:50.761477 7387 scope.go:117] "RemoveContainer" containerID="b291f8e827490042eb3fb88b716e290e8802aa029f1abc52b08ef049c1f2620a"
Mar 08 03:29:50.761933 master-0 kubenswrapper[7387]: E0308 03:29:50.761838 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-4bpl8_openshift-ingress-operator(197afe92-5912-4e90-a477-e3abe001bbc7)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" podUID="197afe92-5912-4e90-a477-e3abe001bbc7"
Mar 08 03:29:50.762105 master-0 kubenswrapper[7387]: E0308 03:29:50.761937 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-kfmd9_openshift-cluster-storage-operator(9fb588a9-6240-4513-8e4b-248eb43d3f06)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" podUID="9fb588a9-6240-4513-8e4b-248eb43d3f06"
Mar 08 03:29:51.599857 master-0 kubenswrapper[7387]: I0308 03:29:51.599772 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:29:51.599857 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:29:51.599857 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:29:51.599857 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:29:51.600818 master-0 kubenswrapper[7387]: I0308 03:29:51.599870 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:29:52.599691 master-0 kubenswrapper[7387]: I0308 03:29:52.599582 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:29:52.599691 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:29:52.599691 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:29:52.599691 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:29:52.599691 master-0 kubenswrapper[7387]: I0308 03:29:52.599675 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:29:53.142784 master-0 kubenswrapper[7387]: E0308 03:29:53.142671 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:29:43Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:29:43Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:29:43Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T03:29:43Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:29:53.600095 master-0 kubenswrapper[7387]: I0308 03:29:53.599613 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:29:53.600095 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:29:53.600095 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:29:53.600095 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:29:53.600095 master-0 kubenswrapper[7387]: I0308 03:29:53.599722 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:29:54.600175 master-0 kubenswrapper[7387]: I0308 03:29:54.600123 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:29:54.600175 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:29:54.600175 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:29:54.600175 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:29:54.601240 master-0 kubenswrapper[7387]: I0308 03:29:54.601194 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:29:55.322745 master-0 kubenswrapper[7387]: E0308 03:29:55.322653 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 08 03:29:55.601498 master-0 kubenswrapper[7387]: I0308 03:29:55.601278 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:29:55.601498 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:29:55.601498 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:29:55.601498 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:29:55.601498 master-0 kubenswrapper[7387]: I0308 03:29:55.601382 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:29:56.600055 master-0 kubenswrapper[7387]: I0308 03:29:56.599952 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:29:56.600055 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:29:56.600055 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:29:56.600055 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:29:56.600739 master-0 kubenswrapper[7387]: I0308 03:29:56.600061 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:29:57.599504 master-0 kubenswrapper[7387]: I0308 03:29:57.599411 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:29:57.599504 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:29:57.599504 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:29:57.599504 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:29:57.600116 master-0 kubenswrapper[7387]: I0308 03:29:57.599572 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:29:58.599708 master-0 kubenswrapper[7387]: I0308 03:29:58.599650 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:29:58.599708 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:29:58.599708 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:29:58.599708 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:29:58.600957 master-0 kubenswrapper[7387]: I0308 03:29:58.600885 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:29:59.599753 master-0 kubenswrapper[7387]: I0308 03:29:59.599675 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:29:59.599753 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:29:59.599753 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:29:59.599753 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:29:59.600758 master-0 kubenswrapper[7387]: I0308 03:29:59.599759 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:00.599468 master-0 kubenswrapper[7387]: I0308 03:30:00.599395 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:00.599468 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:00.599468 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:00.599468 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:00.600750 master-0 kubenswrapper[7387]: I0308 03:30:00.599478 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:00.761096 master-0 kubenswrapper[7387]: I0308 03:30:00.761049 7387 scope.go:117] "RemoveContainer" containerID="f6002d889d471a68aa7f22937f49a82a2b4b24ab138311c194765fb16289177d"
Mar 08 03:30:00.761594 master-0 kubenswrapper[7387]: E0308 03:30:00.761569 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 08 03:30:01.599352 master-0 kubenswrapper[7387]: I0308 03:30:01.599283 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:01.599352 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:01.599352 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:01.599352 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:01.599651 master-0 kubenswrapper[7387]: I0308 03:30:01.599378 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:02.600026 master-0 kubenswrapper[7387]: I0308 03:30:02.599942 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:02.600026 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:02.600026 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:02.600026 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:02.601245 master-0 kubenswrapper[7387]: I0308 03:30:02.600034 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:03.143787 master-0 kubenswrapper[7387]: E0308 03:30:03.143696 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:30:03.599461 master-0 kubenswrapper[7387]: I0308 03:30:03.599382 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:03.599461 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:03.599461 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:03.599461 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:03.599838 master-0 kubenswrapper[7387]: I0308 03:30:03.599469 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:03.765379 master-0 kubenswrapper[7387]: I0308 03:30:03.765293 7387 scope.go:117] "RemoveContainer" containerID="05444228b07f2531dcfc116cb1ae698869c129d21221e77d1dfea4921d9d08c4"
Mar 08 03:30:04.489900 master-0 kubenswrapper[7387]: I0308 03:30:04.489821 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4bpl8_197afe92-5912-4e90-a477-e3abe001bbc7/ingress-operator/4.log"
Mar 08 03:30:04.490629 master-0 kubenswrapper[7387]: I0308 03:30:04.490579 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" event={"ID":"197afe92-5912-4e90-a477-e3abe001bbc7","Type":"ContainerStarted","Data":"400bc47f59a44bd7ab8b8e330655c0140183015b0be727e7a990e21e1158ecfe"}
Mar 08 03:30:04.599960 master-0 kubenswrapper[7387]: I0308 03:30:04.599861 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:04.599960 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:04.599960 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:04.599960 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:04.600367 master-0 kubenswrapper[7387]: I0308 03:30:04.599984 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:04.759687 master-0 kubenswrapper[7387]: I0308 03:30:04.759606 7387 scope.go:117] "RemoveContainer" containerID="b291f8e827490042eb3fb88b716e290e8802aa029f1abc52b08ef049c1f2620a"
Mar 08 03:30:04.760068 master-0 kubenswrapper[7387]: E0308 03:30:04.760016 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-kfmd9_openshift-cluster-storage-operator(9fb588a9-6240-4513-8e4b-248eb43d3f06)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" podUID="9fb588a9-6240-4513-8e4b-248eb43d3f06"
Mar 08 03:30:05.600344 master-0 kubenswrapper[7387]: I0308 03:30:05.600272 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:05.600344 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:05.600344 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:05.600344 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:05.601376 master-0 kubenswrapper[7387]: I0308 03:30:05.600351 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:06.599764 master-0 kubenswrapper[7387]: I0308 03:30:06.599673 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:06.599764 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:06.599764 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:06.599764 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:06.600314 master-0 kubenswrapper[7387]: I0308 03:30:06.599775 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:07.599254 master-0 kubenswrapper[7387]: I0308 03:30:07.599162 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:07.599254 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:07.599254 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:07.599254 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:07.599254 master-0 kubenswrapper[7387]: I0308 03:30:07.599244 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:08.599346 master-0 kubenswrapper[7387]: I0308 03:30:08.599220 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:08.599346 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:08.599346 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:08.599346 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:08.600613 master-0 kubenswrapper[7387]: I0308 03:30:08.599339 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:09.600623 master-0 kubenswrapper[7387]: I0308 03:30:09.600522 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:09.600623 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:09.600623 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:09.600623 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:09.600623 master-0 kubenswrapper[7387]: I0308 03:30:09.600616 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:10.599119 master-0 kubenswrapper[7387]: I0308 03:30:10.599017 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:10.599119 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:10.599119 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:10.599119 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:10.599119 master-0 kubenswrapper[7387]: I0308 03:30:10.599097 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:11.598794 master-0 kubenswrapper[7387]: I0308 03:30:11.598676 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:11.598794 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:11.598794 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:11.598794 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:11.598794 master-0 kubenswrapper[7387]: I0308 03:30:11.598732 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:12.324366 master-0 kubenswrapper[7387]: E0308 03:30:12.324208 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 08 03:30:12.562160 master-0 kubenswrapper[7387]: I0308 03:30:12.561997 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5685fbc7d-xbrdp_3d69f101-60a8-41fd-bcda-4eb654c626a2/csi-snapshot-controller-operator/1.log"
Mar 08 03:30:12.562160 master-0 kubenswrapper[7387]: I0308 03:30:12.562094 7387 generic.go:334] "Generic (PLEG): container finished" podID="3d69f101-60a8-41fd-bcda-4eb654c626a2" containerID="c2ca8d040bfba75b786491a7f494a16b01e68ff5762368d65a86118d64a49cb6" exitCode=0
Mar 08 03:30:12.562160 master-0 kubenswrapper[7387]: I0308 03:30:12.562154 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-xbrdp" event={"ID":"3d69f101-60a8-41fd-bcda-4eb654c626a2","Type":"ContainerDied","Data":"c2ca8d040bfba75b786491a7f494a16b01e68ff5762368d65a86118d64a49cb6"}
Mar 08 03:30:12.562624 master-0 kubenswrapper[7387]: I0308 03:30:12.562214 7387 scope.go:117] "RemoveContainer" containerID="35a84530b9b77d1b843b53e9598fc2ad2b53c4132c228552e8ac9e5d303df9ce"
Mar 08 03:30:12.564113 master-0 kubenswrapper[7387]: I0308 03:30:12.563215 7387 scope.go:117] "RemoveContainer" containerID="c2ca8d040bfba75b786491a7f494a16b01e68ff5762368d65a86118d64a49cb6"
Mar 08 03:30:12.599889 master-0 kubenswrapper[7387]: I0308 03:30:12.599795 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:12.599889 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:12.599889 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:12.599889 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:12.601024 master-0 kubenswrapper[7387]: I0308 03:30:12.599932 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:13.144586 master-0 kubenswrapper[7387]: E0308 03:30:13.144479 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:30:13.587074 master-0 kubenswrapper[7387]: I0308 03:30:13.586996 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-xbrdp" event={"ID":"3d69f101-60a8-41fd-bcda-4eb654c626a2","Type":"ContainerStarted","Data":"dd946209599889e5a16e45a180593f6f75dde3d6d914292c0cdaadec1c1176b3"}
Mar 08 03:30:13.589853 master-0 kubenswrapper[7387]: I0308 03:30:13.589763 7387 generic.go:334] "Generic (PLEG): container finished" podID="a0ee8c53-bf36-4459-a2c2-380293a09e26" containerID="a37cd76e25a0f8104dadf4dc40b6fbbd6e89423031b1f10fd470d329da3c1ab7" exitCode=0
Mar 08 03:30:13.589853 master-0 kubenswrapper[7387]: I0308 03:30:13.589838 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" event={"ID":"a0ee8c53-bf36-4459-a2c2-380293a09e26","Type":"ContainerDied","Data":"a37cd76e25a0f8104dadf4dc40b6fbbd6e89423031b1f10fd470d329da3c1ab7"}
Mar 08 03:30:13.590560 master-0 kubenswrapper[7387]: I0308 03:30:13.590526 7387 scope.go:117] "RemoveContainer" containerID="a37cd76e25a0f8104dadf4dc40b6fbbd6e89423031b1f10fd470d329da3c1ab7"
Mar 08 03:30:13.599880 master-0 kubenswrapper[7387]: I0308 03:30:13.599818 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:13.599880 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:13.599880 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:13.599880 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:13.599880 master-0 kubenswrapper[7387]: I0308 03:30:13.599869 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:14.600030 master-0 kubenswrapper[7387]: I0308 03:30:14.599964 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:14.600030 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:14.600030 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:14.600030 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:14.601334 master-0 kubenswrapper[7387]: I0308 03:30:14.600041 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:14.602088 master-0 kubenswrapper[7387]: I0308 03:30:14.602000 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" event={"ID":"a0ee8c53-bf36-4459-a2c2-380293a09e26","Type":"ContainerStarted","Data":"510bc972f02c805726ce0e8b26c9f46e3ffb7b53590b52c60f2d8c1b5c1b2518"}
Mar 08 03:30:14.602647 master-0 kubenswrapper[7387]: I0308 03:30:14.602495 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh"
Mar 08 03:30:14.760900 master-0 kubenswrapper[7387]: I0308 03:30:14.760845 7387 scope.go:117] "RemoveContainer" containerID="f6002d889d471a68aa7f22937f49a82a2b4b24ab138311c194765fb16289177d"
Mar 08 03:30:14.761660 master-0 kubenswrapper[7387]: E0308 03:30:14.761623 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 08 03:30:15.601274 master-0 kubenswrapper[7387]: I0308 03:30:15.601171 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:15.601274 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:15.601274 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:15.601274 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:15.602505 master-0 kubenswrapper[7387]: I0308 03:30:15.602269 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:15.604446 master-0 kubenswrapper[7387]: I0308 03:30:15.602773 7387 patch_prober.go:28] interesting pod/route-controller-manager-694774cfc9-r5gkh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.76:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 03:30:15.604446 master-0 kubenswrapper[7387]: I0308 03:30:15.602999 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" podUID="a0ee8c53-bf36-4459-a2c2-380293a09e26" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.76:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:30:15.615319 master-0 kubenswrapper[7387]: I0308 03:30:15.615246 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-rz5c8_89e15db4-c541-4d53-878d-706fa022f970/kube-scheduler-operator-container/2.log"
Mar 08 03:30:15.615470 master-0 kubenswrapper[7387]: I0308 03:30:15.615346 7387 generic.go:334] "Generic (PLEG): container finished" podID="89e15db4-c541-4d53-878d-706fa022f970" containerID="00d9ac3c9b6193b454aa568c1a383fab452df49e6573435f6a143be4c2708486" exitCode=0
Mar 08 03:30:15.615470 master-0 kubenswrapper[7387]: I0308 03:30:15.615409 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" event={"ID":"89e15db4-c541-4d53-878d-706fa022f970","Type":"ContainerDied","Data":"00d9ac3c9b6193b454aa568c1a383fab452df49e6573435f6a143be4c2708486"}
Mar 08 03:30:15.615622 master-0 kubenswrapper[7387]: I0308 03:30:15.615496 7387 scope.go:117] "RemoveContainer" containerID="279e20703ffc1523384ecb744bab2f75686744f29f2bd2fc07a960cf86d7af7c"
Mar 08 03:30:15.616477 master-0 kubenswrapper[7387]: I0308 03:30:15.616421 7387 scope.go:117] "RemoveContainer" containerID="00d9ac3c9b6193b454aa568c1a383fab452df49e6573435f6a143be4c2708486"
Mar 08 03:30:16.028657 master-0 kubenswrapper[7387]: I0308 03:30:16.028552 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-retry-1-master-0"]
Mar 08 03:30:16.029340 master-0 kubenswrapper[7387]: E0308 03:30:16.029275 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c20b192-755d-46cd-ab12-2e823b92222e" containerName="installer"
Mar 08 03:30:16.029340 master-0 kubenswrapper[7387]: I0308 03:30:16.029318 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c20b192-755d-46cd-ab12-2e823b92222e" containerName="installer"
Mar 08 03:30:16.029598 master-0 kubenswrapper[7387]: E0308 03:30:16.029360 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea52bbe-5b64-45c7-8f8c-81d027f133d0" containerName="installer"
Mar 08 03:30:16.029598 master-0 kubenswrapper[7387]: I0308 03:30:16.029378 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea52bbe-5b64-45c7-8f8c-81d027f133d0" containerName="installer"
Mar 08 03:30:16.029598 master-0 kubenswrapper[7387]: E0308 03:30:16.029405 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7152f2-d51f-4e15-8e0a-92278cbecd53" containerName="installer"
Mar 08 03:30:16.029598 master-0 kubenswrapper[7387]: I0308 03:30:16.029423 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7152f2-d51f-4e15-8e0a-92278cbecd53" containerName="installer"
Mar 08 03:30:16.029978 master-0 kubenswrapper[7387]: I0308 03:30:16.029769 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c20b192-755d-46cd-ab12-2e823b92222e" containerName="installer"
Mar 08 03:30:16.029978 master-0 kubenswrapper[7387]: I0308 03:30:16.029841 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a7152f2-d51f-4e15-8e0a-92278cbecd53" containerName="installer"
Mar 08 03:30:16.029978 master-0 kubenswrapper[7387]: I0308 03:30:16.029875 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="aea52bbe-5b64-45c7-8f8c-81d027f133d0" containerName="installer"
Mar 08 03:30:16.030990 master-0 kubenswrapper[7387]: I0308 03:30:16.030940 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 08 03:30:16.034400 master-0 kubenswrapper[7387]: I0308 03:30:16.034318 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-2tj6k"
Mar 08 03:30:16.040348 master-0 kubenswrapper[7387]: I0308 03:30:16.040159 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 08 03:30:16.045875 master-0 kubenswrapper[7387]: I0308 03:30:16.045275 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-retry-1-master-0"]
Mar 08 03:30:16.208513 master-0 kubenswrapper[7387]: I0308 03:30:16.208391 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/627f0501-8b6a-4bc7-b610-355a0661f385-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"627f0501-8b6a-4bc7-b610-355a0661f385\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 08 03:30:16.208801 master-0 kubenswrapper[7387]: I0308 03:30:16.208771 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/627f0501-8b6a-4bc7-b610-355a0661f385-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"627f0501-8b6a-4bc7-b610-355a0661f385\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 08 03:30:16.209013 master-0 kubenswrapper[7387]: I0308 03:30:16.208981 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/627f0501-8b6a-4bc7-b610-355a0661f385-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"627f0501-8b6a-4bc7-b610-355a0661f385\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 08 03:30:16.310762 master-0 kubenswrapper[7387]: I0308 03:30:16.310722 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/627f0501-8b6a-4bc7-b610-355a0661f385-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"627f0501-8b6a-4bc7-b610-355a0661f385\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 08 03:30:16.311107 master-0 kubenswrapper[7387]: I0308 03:30:16.311075 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/627f0501-8b6a-4bc7-b610-355a0661f385-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"627f0501-8b6a-4bc7-b610-355a0661f385\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 08 03:30:16.311421 master-0 kubenswrapper[7387]: I0308 03:30:16.311381 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/627f0501-8b6a-4bc7-b610-355a0661f385-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"627f0501-8b6a-4bc7-b610-355a0661f385\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 08 03:30:16.311660 master-0 kubenswrapper[7387]: I0308 03:30:16.310939 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/627f0501-8b6a-4bc7-b610-355a0661f385-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"627f0501-8b6a-4bc7-b610-355a0661f385\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 08 03:30:16.311867 master-0 kubenswrapper[7387]: I0308 03:30:16.311500 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/627f0501-8b6a-4bc7-b610-355a0661f385-var-lock\") pod
\"installer-2-retry-1-master-0\" (UID: \"627f0501-8b6a-4bc7-b610-355a0661f385\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Mar 08 03:30:16.600018 master-0 kubenswrapper[7387]: I0308 03:30:16.599939 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:30:16.600018 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:30:16.600018 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:30:16.600018 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:30:16.600274 master-0 kubenswrapper[7387]: I0308 03:30:16.600050 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:30:16.615977 master-0 kubenswrapper[7387]: I0308 03:30:16.615878 7387 patch_prober.go:28] interesting pod/route-controller-manager-694774cfc9-r5gkh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.76:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 03:30:16.616517 master-0 kubenswrapper[7387]: I0308 03:30:16.616022 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" podUID="a0ee8c53-bf36-4459-a2c2-380293a09e26" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.76:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 03:30:16.626330 master-0 
kubenswrapper[7387]: I0308 03:30:16.626271 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" event={"ID":"89e15db4-c541-4d53-878d-706fa022f970","Type":"ContainerStarted","Data":"950f94c6911e149b7be2dd3d4f7aa50d7480e512077474a7b613f475690032ec"} Mar 08 03:30:16.628298 master-0 kubenswrapper[7387]: I0308 03:30:16.628256 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-5l4t7_8c65557b-9566-49f1-a049-fe492ca201b5/machine-api-operator/0.log" Mar 08 03:30:16.629381 master-0 kubenswrapper[7387]: I0308 03:30:16.629291 7387 generic.go:334] "Generic (PLEG): container finished" podID="8c65557b-9566-49f1-a049-fe492ca201b5" containerID="a06749d70fe898a009e67138a8c24210d9e9c5e2f8da6592f0e5a82371873c57" exitCode=255 Mar 08 03:30:16.629478 master-0 kubenswrapper[7387]: I0308 03:30:16.629374 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" event={"ID":"8c65557b-9566-49f1-a049-fe492ca201b5","Type":"ContainerDied","Data":"a06749d70fe898a009e67138a8c24210d9e9c5e2f8da6592f0e5a82371873c57"} Mar 08 03:30:16.630058 master-0 kubenswrapper[7387]: I0308 03:30:16.630019 7387 scope.go:117] "RemoveContainer" containerID="a06749d70fe898a009e67138a8c24210d9e9c5e2f8da6592f0e5a82371873c57" Mar 08 03:30:16.736384 master-0 kubenswrapper[7387]: I0308 03:30:16.736321 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/627f0501-8b6a-4bc7-b610-355a0661f385-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"627f0501-8b6a-4bc7-b610-355a0661f385\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Mar 08 03:30:16.971695 master-0 kubenswrapper[7387]: I0308 03:30:16.971606 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Mar 08 03:30:17.468149 master-0 kubenswrapper[7387]: I0308 03:30:17.468083 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-retry-1-master-0"] Mar 08 03:30:17.468602 master-0 kubenswrapper[7387]: W0308 03:30:17.468520 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod627f0501_8b6a_4bc7_b610_355a0661f385.slice/crio-b797749641d447516f356d6b48bcc046c06d0d3a6ceeefc387a38da2d330845e WatchSource:0}: Error finding container b797749641d447516f356d6b48bcc046c06d0d3a6ceeefc387a38da2d330845e: Status 404 returned error can't find the container with id b797749641d447516f356d6b48bcc046c06d0d3a6ceeefc387a38da2d330845e Mar 08 03:30:17.599256 master-0 kubenswrapper[7387]: I0308 03:30:17.599180 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:30:17.599256 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:30:17.599256 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:30:17.599256 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:30:17.599829 master-0 kubenswrapper[7387]: I0308 03:30:17.599275 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:30:17.648248 master-0 kubenswrapper[7387]: I0308 03:30:17.648122 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-5l4t7_8c65557b-9566-49f1-a049-fe492ca201b5/machine-api-operator/0.log" Mar 08 03:30:17.649729 master-0 
kubenswrapper[7387]: I0308 03:30:17.649649 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" event={"ID":"8c65557b-9566-49f1-a049-fe492ca201b5","Type":"ContainerStarted","Data":"673e5b2211a53fd146cbdd279adf111aa2e32ec24a1ac1afa49d1a3f4ddebb7c"} Mar 08 03:30:17.653554 master-0 kubenswrapper[7387]: I0308 03:30:17.653397 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"627f0501-8b6a-4bc7-b610-355a0661f385","Type":"ContainerStarted","Data":"b797749641d447516f356d6b48bcc046c06d0d3a6ceeefc387a38da2d330845e"} Mar 08 03:30:18.578262 master-0 kubenswrapper[7387]: I0308 03:30:18.578175 7387 patch_prober.go:28] interesting pod/route-controller-manager-694774cfc9-r5gkh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.76:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 03:30:18.578659 master-0 kubenswrapper[7387]: I0308 03:30:18.578301 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" podUID="a0ee8c53-bf36-4459-a2c2-380293a09e26" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.76:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 03:30:18.599895 master-0 kubenswrapper[7387]: I0308 03:30:18.599778 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:30:18.599895 master-0 kubenswrapper[7387]: [-]has-synced failed: reason 
withheld Mar 08 03:30:18.599895 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:30:18.599895 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:30:18.601738 master-0 kubenswrapper[7387]: I0308 03:30:18.599985 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:30:18.667589 master-0 kubenswrapper[7387]: I0308 03:30:18.667505 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-dn4ll_c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/etcd-operator/3.log" Mar 08 03:30:18.668527 master-0 kubenswrapper[7387]: I0308 03:30:18.667607 7387 generic.go:334] "Generic (PLEG): container finished" podID="c6e4afd0-fbcd-49c7-9132-b54c9c28b74b" containerID="ba71a05bad6a20ee6c802a92e9435b17cd722af277a98de423aa90bee7e17757" exitCode=0 Mar 08 03:30:18.668527 master-0 kubenswrapper[7387]: I0308 03:30:18.668180 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" event={"ID":"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b","Type":"ContainerDied","Data":"ba71a05bad6a20ee6c802a92e9435b17cd722af277a98de423aa90bee7e17757"} Mar 08 03:30:18.668527 master-0 kubenswrapper[7387]: I0308 03:30:18.668331 7387 scope.go:117] "RemoveContainer" containerID="c570ba340cf097b9a186b03c44668b2eb412d97ceaff7d6fc9d02e3d84a0cdb3" Mar 08 03:30:18.669275 master-0 kubenswrapper[7387]: I0308 03:30:18.669210 7387 scope.go:117] "RemoveContainer" containerID="ba71a05bad6a20ee6c802a92e9435b17cd722af277a98de423aa90bee7e17757" Mar 08 03:30:18.671365 master-0 kubenswrapper[7387]: I0308 03:30:18.671313 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" 
event={"ID":"627f0501-8b6a-4bc7-b610-355a0661f385","Type":"ContainerStarted","Data":"39acd779a6b4efc5eaa5408d29d32ff65cfd712c0fbed2aa3652c2244b17d9bc"} Mar 08 03:30:18.742314 master-0 kubenswrapper[7387]: I0308 03:30:18.742230 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" podStartSLOduration=2.742201625 podStartE2EDuration="2.742201625s" podCreationTimestamp="2026-03-08 03:30:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:30:18.740692426 +0000 UTC m=+1155.135168137" watchObservedRunningTime="2026-03-08 03:30:18.742201625 +0000 UTC m=+1155.136677316" Mar 08 03:30:19.600247 master-0 kubenswrapper[7387]: I0308 03:30:19.600145 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:30:19.600247 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:30:19.600247 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:30:19.600247 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:30:19.601966 master-0 kubenswrapper[7387]: I0308 03:30:19.600259 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:30:19.684749 master-0 kubenswrapper[7387]: I0308 03:30:19.684627 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" event={"ID":"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b","Type":"ContainerStarted","Data":"1af682b4943615c5c2b7ef8840078ec5418d74397ffac005004068932171075c"} Mar 08 
03:30:19.760615 master-0 kubenswrapper[7387]: I0308 03:30:19.760499 7387 scope.go:117] "RemoveContainer" containerID="b291f8e827490042eb3fb88b716e290e8802aa029f1abc52b08ef049c1f2620a" Mar 08 03:30:19.760956 master-0 kubenswrapper[7387]: E0308 03:30:19.760838 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-kfmd9_openshift-cluster-storage-operator(9fb588a9-6240-4513-8e4b-248eb43d3f06)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" podUID="9fb588a9-6240-4513-8e4b-248eb43d3f06" Mar 08 03:30:20.600106 master-0 kubenswrapper[7387]: I0308 03:30:20.600013 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:30:20.600106 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:30:20.600106 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:30:20.600106 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:30:20.600672 master-0 kubenswrapper[7387]: I0308 03:30:20.600137 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:30:20.698702 master-0 kubenswrapper[7387]: I0308 03:30:20.698604 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-k8xgg_90ef7c0a-7c6f-45aa-865d-1e247110b265/authentication-operator/2.log" Mar 08 03:30:20.699562 master-0 kubenswrapper[7387]: I0308 03:30:20.698748 7387 generic.go:334] "Generic (PLEG): 
container finished" podID="90ef7c0a-7c6f-45aa-865d-1e247110b265" containerID="5c0ec338f20c1d3f7f3579ad9e29304940d141e2ae52320c796bdc9c2392d2b5" exitCode=0 Mar 08 03:30:20.699562 master-0 kubenswrapper[7387]: I0308 03:30:20.698830 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" event={"ID":"90ef7c0a-7c6f-45aa-865d-1e247110b265","Type":"ContainerDied","Data":"5c0ec338f20c1d3f7f3579ad9e29304940d141e2ae52320c796bdc9c2392d2b5"} Mar 08 03:30:20.699562 master-0 kubenswrapper[7387]: I0308 03:30:20.698946 7387 scope.go:117] "RemoveContainer" containerID="dd4d219059033c12e8a9f8e3d34a3c3099d9ccfe2b147440dd167716ec750fdc" Mar 08 03:30:20.700196 master-0 kubenswrapper[7387]: I0308 03:30:20.700152 7387 scope.go:117] "RemoveContainer" containerID="5c0ec338f20c1d3f7f3579ad9e29304940d141e2ae52320c796bdc9c2392d2b5" Mar 08 03:30:21.600102 master-0 kubenswrapper[7387]: I0308 03:30:21.599996 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:30:21.600102 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:30:21.600102 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:30:21.600102 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:30:21.600517 master-0 kubenswrapper[7387]: I0308 03:30:21.600107 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:30:21.650021 master-0 kubenswrapper[7387]: E0308 03:30:21.649843 7387 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context 
deadline exceeded" event="&Event{ObjectMeta:{ingress-operator-677db989d6-4bpl8.189abf9a9d599e9d openshift-ingress-operator 10097 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress-operator,Name:ingress-operator-677db989d6-4bpl8,UID:197afe92-5912-4e90-a477-e3abe001bbc7,APIVersion:v1,ResourceVersion:3636,FieldPath:spec.containers{ingress-operator},},Reason:BackOff,Message:Back-off restarting failed container ingress-operator in pod ingress-operator-677db989d6-4bpl8_openshift-ingress-operator(197afe92-5912-4e90-a477-e3abe001bbc7),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:21:14 +0000 UTC,LastTimestamp:2026-03-08 03:26:26.760253822 +0000 UTC m=+923.154729543,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:30:21.713229 master-0 kubenswrapper[7387]: I0308 03:30:21.713119 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" event={"ID":"90ef7c0a-7c6f-45aa-865d-1e247110b265","Type":"ContainerStarted","Data":"6c8e08df0783ed89aaecfc39ac83ef62a0d9ae67ffffc742539acc908f4d04ea"} Mar 08 03:30:22.599479 master-0 kubenswrapper[7387]: I0308 03:30:22.599371 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:30:22.599479 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:30:22.599479 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:30:22.599479 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:30:22.600004 master-0 kubenswrapper[7387]: I0308 03:30:22.599494 7387 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:30:23.145950 master-0 kubenswrapper[7387]: E0308 03:30:23.145818 7387 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded" Mar 08 03:30:23.600730 master-0 kubenswrapper[7387]: I0308 03:30:23.600593 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:30:23.600730 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:30:23.600730 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:30:23.600730 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:30:23.601219 master-0 kubenswrapper[7387]: I0308 03:30:23.600765 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:30:24.599378 master-0 kubenswrapper[7387]: I0308 03:30:24.599308 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:30:24.599378 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:30:24.599378 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:30:24.599378 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:30:24.600126 master-0 kubenswrapper[7387]: 
I0308 03:30:24.599387 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:30:25.599040 master-0 kubenswrapper[7387]: I0308 03:30:25.598946 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:30:25.599040 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:30:25.599040 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:30:25.599040 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:30:25.599891 master-0 kubenswrapper[7387]: I0308 03:30:25.599042 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:30:26.601134 master-0 kubenswrapper[7387]: I0308 03:30:26.601046 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:30:26.601134 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:30:26.601134 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:30:26.601134 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:30:26.602246 master-0 kubenswrapper[7387]: I0308 03:30:26.601159 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:30:27.599795 master-0 kubenswrapper[7387]: I0308 03:30:27.599685 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:30:27.599795 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:30:27.599795 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:30:27.599795 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:30:27.600277 master-0 kubenswrapper[7387]: I0308 03:30:27.599791 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:30:28.577497 master-0 kubenswrapper[7387]: I0308 03:30:28.577377 7387 patch_prober.go:28] interesting pod/route-controller-manager-694774cfc9-r5gkh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.76:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 03:30:28.577497 master-0 kubenswrapper[7387]: I0308 03:30:28.577469 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" podUID="a0ee8c53-bf36-4459-a2c2-380293a09e26" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.76:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 03:30:28.599747 master-0 kubenswrapper[7387]: I0308 03:30:28.599699 7387 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:30:28.599747 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:30:28.599747 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:30:28.599747 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:30:28.600305 master-0 kubenswrapper[7387]: I0308 03:30:28.600251 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:30:29.326872 master-0 kubenswrapper[7387]: E0308 03:30:29.326741 7387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 08 03:30:29.600329 master-0 kubenswrapper[7387]: I0308 03:30:29.600134 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:30:29.600329 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:30:29.600329 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:30:29.600329 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:30:29.600329 master-0 kubenswrapper[7387]: I0308 03:30:29.600230 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:29.760721 master-0 kubenswrapper[7387]: I0308 03:30:29.760640 7387 scope.go:117] "RemoveContainer" containerID="f6002d889d471a68aa7f22937f49a82a2b4b24ab138311c194765fb16289177d"
Mar 08 03:30:29.761182 master-0 kubenswrapper[7387]: E0308 03:30:29.761128 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 08 03:30:30.599537 master-0 kubenswrapper[7387]: I0308 03:30:30.599447 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:30.599537 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:30.599537 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:30.599537 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:30.600044 master-0 kubenswrapper[7387]: I0308 03:30:30.599538 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:30.760624 master-0 kubenswrapper[7387]: I0308 03:30:30.760539 7387 scope.go:117] "RemoveContainer" containerID="b291f8e827490042eb3fb88b716e290e8802aa029f1abc52b08ef049c1f2620a"
Mar 08 03:30:30.761412 master-0 kubenswrapper[7387]: E0308 03:30:30.761163 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-kfmd9_openshift-cluster-storage-operator(9fb588a9-6240-4513-8e4b-248eb43d3f06)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" podUID="9fb588a9-6240-4513-8e4b-248eb43d3f06"
Mar 08 03:30:31.449841 master-0 kubenswrapper[7387]: I0308 03:30:31.449744 7387 patch_prober.go:28] interesting pod/etcd-operator-5884b9cd56-dn4ll container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body=
Mar 08 03:30:31.449841 master-0 kubenswrapper[7387]: I0308 03:30:31.449820 7387 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" podUID="c6e4afd0-fbcd-49c7-9132-b54c9c28b74b" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused"
Mar 08 03:30:31.599891 master-0 kubenswrapper[7387]: I0308 03:30:31.599801 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:31.599891 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:31.599891 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:31.599891 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:31.599891 master-0 kubenswrapper[7387]: I0308 03:30:31.599888 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:32.599058 master-0 kubenswrapper[7387]: I0308 03:30:32.598966 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:32.599058 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:32.599058 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:32.599058 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:32.600011 master-0 kubenswrapper[7387]: I0308 03:30:32.599084 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:32.806875 master-0 kubenswrapper[7387]: I0308 03:30:32.806822 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca_service-ca-84bfdbbb7f-jnpl5_7af634f0-65ac-402a-acd6-a8aad11b37ab/service-ca-controller/1.log"
Mar 08 03:30:32.807138 master-0 kubenswrapper[7387]: I0308 03:30:32.806888 7387 generic.go:334] "Generic (PLEG): container finished" podID="7af634f0-65ac-402a-acd6-a8aad11b37ab" containerID="4ba849afa6c1096c68700ba2a3716f297bd7a9a7ae2cf94f600da7b5f14c3033" exitCode=0
Mar 08 03:30:32.807138 master-0 kubenswrapper[7387]: I0308 03:30:32.806979 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5" event={"ID":"7af634f0-65ac-402a-acd6-a8aad11b37ab","Type":"ContainerDied","Data":"4ba849afa6c1096c68700ba2a3716f297bd7a9a7ae2cf94f600da7b5f14c3033"}
Mar 08 03:30:32.807138 master-0 kubenswrapper[7387]: I0308 03:30:32.807021 7387 scope.go:117] "RemoveContainer" containerID="7d5086bc52f5bb65f0e405da68bda521bfa3fc867442a2ce84f387697f4853be"
Mar 08 03:30:32.807717 master-0 kubenswrapper[7387]: I0308 03:30:32.807613 7387 scope.go:117] "RemoveContainer" containerID="4ba849afa6c1096c68700ba2a3716f297bd7a9a7ae2cf94f600da7b5f14c3033"
Mar 08 03:30:32.815517 master-0 kubenswrapper[7387]: I0308 03:30:32.815390 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-gstfr_2a506cf6-bc39-4089-9caa-4c14c4d15c11/openshift-apiserver-operator/2.log"
Mar 08 03:30:32.815517 master-0 kubenswrapper[7387]: I0308 03:30:32.815442 7387 generic.go:334] "Generic (PLEG): container finished" podID="2a506cf6-bc39-4089-9caa-4c14c4d15c11" containerID="62e972b8bed8e15ecb54cf31905c8e961d34ba4506e8988ac047b3329919293e" exitCode=0
Mar 08 03:30:32.815517 master-0 kubenswrapper[7387]: I0308 03:30:32.815513 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" event={"ID":"2a506cf6-bc39-4089-9caa-4c14c4d15c11","Type":"ContainerDied","Data":"62e972b8bed8e15ecb54cf31905c8e961d34ba4506e8988ac047b3329919293e"}
Mar 08 03:30:32.817845 master-0 kubenswrapper[7387]: I0308 03:30:32.817064 7387 scope.go:117] "RemoveContainer" containerID="62e972b8bed8e15ecb54cf31905c8e961d34ba4506e8988ac047b3329919293e"
Mar 08 03:30:32.820208 master-0 kubenswrapper[7387]: I0308 03:30:32.819894 7387 generic.go:334] "Generic (PLEG): container finished" podID="42b9f2d1-da5c-46b5-b131-d206fa37d436" containerID="9ebffe5493b09d3a093aa85180c37071c3a0b4e8c5ef6f4c98982166c5ae432d" exitCode=0
Mar 08 03:30:32.820208 master-0 kubenswrapper[7387]: I0308 03:30:32.819967 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz" event={"ID":"42b9f2d1-da5c-46b5-b131-d206fa37d436","Type":"ContainerDied","Data":"9ebffe5493b09d3a093aa85180c37071c3a0b4e8c5ef6f4c98982166c5ae432d"}
Mar 08 03:30:32.820532 master-0 kubenswrapper[7387]: I0308 03:30:32.820393 7387 scope.go:117] "RemoveContainer" containerID="9ebffe5493b09d3a093aa85180c37071c3a0b4e8c5ef6f4c98982166c5ae432d"
Mar 08 03:30:32.825008 master-0 kubenswrapper[7387]: I0308 03:30:32.824973 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-xtwpr_2468d2a3-ec65-4888-a86a-3f66fa311f56/kube-controller-manager-operator/2.log"
Mar 08 03:30:32.825274 master-0 kubenswrapper[7387]: I0308 03:30:32.825023 7387 generic.go:334] "Generic (PLEG): container finished" podID="2468d2a3-ec65-4888-a86a-3f66fa311f56" containerID="f750a9def8422866b22d39a2cd3d196c793426a1bcfc147c9836ec1f7382a781" exitCode=0
Mar 08 03:30:32.825274 master-0 kubenswrapper[7387]: I0308 03:30:32.825092 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr" event={"ID":"2468d2a3-ec65-4888-a86a-3f66fa311f56","Type":"ContainerDied","Data":"f750a9def8422866b22d39a2cd3d196c793426a1bcfc147c9836ec1f7382a781"}
Mar 08 03:30:32.826243 master-0 kubenswrapper[7387]: I0308 03:30:32.825814 7387 scope.go:117] "RemoveContainer" containerID="f750a9def8422866b22d39a2cd3d196c793426a1bcfc147c9836ec1f7382a781"
Mar 08 03:30:32.828700 master-0 kubenswrapper[7387]: I0308 03:30:32.828508 7387 generic.go:334] "Generic (PLEG): container finished" podID="965f8eef-c5af-499b-b1db-cf63072781cc" containerID="148123547b19a17f13384ac0f521efe52ca11a8ba51861fa9546df274d15fce9" exitCode=0
Mar 08 03:30:32.828700 master-0 kubenswrapper[7387]: I0308 03:30:32.828557 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-vw4v4" event={"ID":"965f8eef-c5af-499b-b1db-cf63072781cc","Type":"ContainerDied","Data":"148123547b19a17f13384ac0f521efe52ca11a8ba51861fa9546df274d15fce9"}
Mar 08 03:30:32.828858 master-0 kubenswrapper[7387]: I0308 03:30:32.828831 7387 scope.go:117] "RemoveContainer" containerID="148123547b19a17f13384ac0f521efe52ca11a8ba51861fa9546df274d15fce9"
Mar 08 03:30:32.840869 master-0 kubenswrapper[7387]: I0308 03:30:32.832397 7387 generic.go:334] "Generic (PLEG): container finished" podID="81abc17a-8a51-44e2-a5df-5ddb394a9fa6" containerID="8520a5f64276e58759b21a4f5abc65748412aaf732608a2bdda90bcabbccfe1e" exitCode=0
Mar 08 03:30:32.840869 master-0 kubenswrapper[7387]: I0308 03:30:32.832527 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" event={"ID":"81abc17a-8a51-44e2-a5df-5ddb394a9fa6","Type":"ContainerDied","Data":"8520a5f64276e58759b21a4f5abc65748412aaf732608a2bdda90bcabbccfe1e"}
Mar 08 03:30:32.840869 master-0 kubenswrapper[7387]: I0308 03:30:32.833260 7387 scope.go:117] "RemoveContainer" containerID="8520a5f64276e58759b21a4f5abc65748412aaf732608a2bdda90bcabbccfe1e"
Mar 08 03:30:32.840869 master-0 kubenswrapper[7387]: I0308 03:30:32.840310 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-d4wnv_bd1bcaff-7dbd-4559-92fc-5453993f643e/openshift-config-operator/3.log"
Mar 08 03:30:32.840869 master-0 kubenswrapper[7387]: I0308 03:30:32.840667 7387 generic.go:334] "Generic (PLEG): container finished" podID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerID="3ffe89ef5d1c010872dcc8d98905a0b3c74a65a6e59320222ab4708980d7907c" exitCode=0
Mar 08 03:30:32.840869 master-0 kubenswrapper[7387]: I0308 03:30:32.840722 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" event={"ID":"bd1bcaff-7dbd-4559-92fc-5453993f643e","Type":"ContainerDied","Data":"3ffe89ef5d1c010872dcc8d98905a0b3c74a65a6e59320222ab4708980d7907c"}
Mar 08 03:30:32.841242 master-0 kubenswrapper[7387]: I0308 03:30:32.841214 7387 scope.go:117] "RemoveContainer" containerID="3ffe89ef5d1c010872dcc8d98905a0b3c74a65a6e59320222ab4708980d7907c"
Mar 08 03:30:32.846187 master-0 kubenswrapper[7387]: I0308 03:30:32.843690 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-zcr8w_5a058138-8039-4841-821b-7ee5bb8648e4/kube-apiserver-operator/2.log"
Mar 08 03:30:32.846187 master-0 kubenswrapper[7387]: I0308 03:30:32.843736 7387 generic.go:334] "Generic (PLEG): container finished" podID="5a058138-8039-4841-821b-7ee5bb8648e4" containerID="15751ae441f57c6481deb8b5cc3f72916e46489440f9eb8189b8afd0e24064b8" exitCode=0
Mar 08 03:30:32.846187 master-0 kubenswrapper[7387]: I0308 03:30:32.843795 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w" event={"ID":"5a058138-8039-4841-821b-7ee5bb8648e4","Type":"ContainerDied","Data":"15751ae441f57c6481deb8b5cc3f72916e46489440f9eb8189b8afd0e24064b8"}
Mar 08 03:30:32.846187 master-0 kubenswrapper[7387]: I0308 03:30:32.844258 7387 scope.go:117] "RemoveContainer" containerID="15751ae441f57c6481deb8b5cc3f72916e46489440f9eb8189b8afd0e24064b8"
Mar 08 03:30:32.854834 master-0 kubenswrapper[7387]: I0308 03:30:32.853262 7387 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="85ebc2aadcc00fbddf926f6ab17ab8c204935ad575ebd07cf7adcfc06b4a6c08" exitCode=0
Mar 08 03:30:32.854834 master-0 kubenswrapper[7387]: I0308 03:30:32.853404 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"85ebc2aadcc00fbddf926f6ab17ab8c204935ad575ebd07cf7adcfc06b4a6c08"}
Mar 08 03:30:32.854834 master-0 kubenswrapper[7387]: I0308 03:30:32.854039 7387 scope.go:117] "RemoveContainer" containerID="f6002d889d471a68aa7f22937f49a82a2b4b24ab138311c194765fb16289177d"
Mar 08 03:30:32.854834 master-0 kubenswrapper[7387]: I0308 03:30:32.854058 7387 scope.go:117] "RemoveContainer" containerID="85ebc2aadcc00fbddf926f6ab17ab8c204935ad575ebd07cf7adcfc06b4a6c08"
Mar 08 03:30:32.857280 master-0 kubenswrapper[7387]: I0308 03:30:32.857249 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-7f65c457f5-7k8j7_1d446527-f3fd-4a37-a980-7445031928d1/kube-storage-version-migrator-operator/2.log"
Mar 08 03:30:32.857362 master-0 kubenswrapper[7387]: I0308 03:30:32.857290 7387 generic.go:334] "Generic (PLEG): container finished" podID="1d446527-f3fd-4a37-a980-7445031928d1" containerID="f7da8d6f43578f41e1847ca0341da34176f025a0cb8ed318bf310486d31635fa" exitCode=0
Mar 08 03:30:32.857362 master-0 kubenswrapper[7387]: I0308 03:30:32.857343 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7" event={"ID":"1d446527-f3fd-4a37-a980-7445031928d1","Type":"ContainerDied","Data":"f7da8d6f43578f41e1847ca0341da34176f025a0cb8ed318bf310486d31635fa"}
Mar 08 03:30:32.857787 master-0 kubenswrapper[7387]: I0308 03:30:32.857757 7387 scope.go:117] "RemoveContainer" containerID="f7da8d6f43578f41e1847ca0341da34176f025a0cb8ed318bf310486d31635fa"
Mar 08 03:30:32.860427 master-0 kubenswrapper[7387]: I0308 03:30:32.860396 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-jd7rl_2ffe00fd-6834-4a5b-8b0b-b467d284f23c/cluster-autoscaler-operator/0.log"
Mar 08 03:30:32.860655 master-0 kubenswrapper[7387]: I0308 03:30:32.860625 7387 generic.go:334] "Generic (PLEG): container finished" podID="2ffe00fd-6834-4a5b-8b0b-b467d284f23c" containerID="2858485e79b00900bd163b6f7b2d0d61e9d6beabaa41767ec01d73da348ed50d" exitCode=255
Mar 08 03:30:32.860710 master-0 kubenswrapper[7387]: I0308 03:30:32.860670 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" event={"ID":"2ffe00fd-6834-4a5b-8b0b-b467d284f23c","Type":"ContainerDied","Data":"2858485e79b00900bd163b6f7b2d0d61e9d6beabaa41767ec01d73da348ed50d"}
Mar 08 03:30:32.861076 master-0 kubenswrapper[7387]: I0308 03:30:32.861049 7387 scope.go:117] "RemoveContainer" containerID="2858485e79b00900bd163b6f7b2d0d61e9d6beabaa41767ec01d73da348ed50d"
Mar 08 03:30:32.874352 master-0 kubenswrapper[7387]: I0308 03:30:32.872142 7387 generic.go:334] "Generic (PLEG): container finished" podID="4711e21f-da6d-47ee-8722-64663e05de10" containerID="24027b59dda46d94a7e2a44f624ddff046a8eb2c97a011a50b8c8d2955a5f46d" exitCode=0
Mar 08 03:30:32.874352 master-0 kubenswrapper[7387]: I0308 03:30:32.872262 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt" event={"ID":"4711e21f-da6d-47ee-8722-64663e05de10","Type":"ContainerDied","Data":"24027b59dda46d94a7e2a44f624ddff046a8eb2c97a011a50b8c8d2955a5f46d"}
Mar 08 03:30:32.874352 master-0 kubenswrapper[7387]: I0308 03:30:32.872646 7387 scope.go:117] "RemoveContainer" containerID="24027b59dda46d94a7e2a44f624ddff046a8eb2c97a011a50b8c8d2955a5f46d"
Mar 08 03:30:33.274661 master-0 kubenswrapper[7387]: I0308 03:30:33.274621 7387 scope.go:117] "RemoveContainer" containerID="1d5204ce567ac69cf82074daeb2d6d762b5dea3e2e48fc87e314063a45817203"
Mar 08 03:30:33.301583 master-0 kubenswrapper[7387]: I0308 03:30:33.301534 7387 scope.go:117] "RemoveContainer" containerID="c6227c869f9005e95f446273c65ad19705819a8f1fec09ed23d91f2253df5b7d"
Mar 08 03:30:33.303050 master-0 kubenswrapper[7387]: I0308 03:30:33.303016 7387 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:30:33.340534 master-0 kubenswrapper[7387]: I0308 03:30:33.340478 7387 scope.go:117] "RemoveContainer" containerID="baddc749e42f097718aa35b36ad713f89e081e60f5274e4f8ef3d143389a47d9"
Mar 08 03:30:33.599814 master-0 kubenswrapper[7387]: I0308 03:30:33.599719 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:33.599814 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:33.599814 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:33.599814 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:33.601675 master-0 kubenswrapper[7387]: I0308 03:30:33.599821 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:33.661030 master-0 kubenswrapper[7387]: I0308 03:30:33.660975 7387 scope.go:117] "RemoveContainer" containerID="72b1351e9a3c52004d63474cc4899d00eb9ec35191bb77729c1e4a2c5db91758"
Mar 08 03:30:33.848182 master-0 kubenswrapper[7387]: I0308 03:30:33.848058 7387 scope.go:117] "RemoveContainer" containerID="be3f100eb7ee4d7b6f435b1a7bf70e291908c984ecfe21da6d4b4fe3a36ab5f2"
Mar 08 03:30:33.891757 master-0 kubenswrapper[7387]: I0308 03:30:33.891710 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" event={"ID":"81abc17a-8a51-44e2-a5df-5ddb394a9fa6","Type":"ContainerStarted","Data":"c240ada2e27203664be3266bb9afe3cbe863c239624ff40eec94d8593824af46"}
Mar 08 03:30:33.901991 master-0 kubenswrapper[7387]: I0308 03:30:33.901944 7387 scope.go:117] "RemoveContainer" containerID="b009862d75dae9f3e9089264c59ffc33de04ddd735304db6fbfcc002f9536734"
Mar 08 03:30:33.922392 master-0 kubenswrapper[7387]: I0308 03:30:33.922237 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz" event={"ID":"42b9f2d1-da5c-46b5-b131-d206fa37d436","Type":"ContainerStarted","Data":"e2acb0859f9c748d93ab0aee3200cb37eb9b03ca6a78e84eee21d89ba65ca4e6"}
Mar 08 03:30:33.944657 master-0 kubenswrapper[7387]: I0308 03:30:33.944606 7387 scope.go:117] "RemoveContainer" containerID="817f432c51c661f9dc4a70152616d33f0d5d8c245d1f7dbc4c3905c7f6f13361"
Mar 08 03:30:33.990462 master-0 kubenswrapper[7387]: E0308 03:30:33.990060 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 08 03:30:34.599849 master-0 kubenswrapper[7387]: I0308 03:30:34.599769 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:34.599849 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:34.599849 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:34.599849 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:34.599849 master-0 kubenswrapper[7387]: I0308 03:30:34.599836 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:34.639623 master-0 kubenswrapper[7387]: I0308 03:30:34.639543 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:30:34.946440 master-0 kubenswrapper[7387]: I0308 03:30:34.946256 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" event={"ID":"2a506cf6-bc39-4089-9caa-4c14c4d15c11","Type":"ContainerStarted","Data":"e3f49a2d725e5cf30438f10ea0645c747f706caaacfd0d4ff5853f91aa9e7cba"}
Mar 08 03:30:34.949645 master-0 kubenswrapper[7387]: I0308 03:30:34.949465 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr" event={"ID":"2468d2a3-ec65-4888-a86a-3f66fa311f56","Type":"ContainerStarted","Data":"6d872732014dac9194bf04c490a8845c7740960be98ce910cd2402378447a591"}
Mar 08 03:30:34.955366 master-0 kubenswrapper[7387]: I0308 03:30:34.955298 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-vw4v4" event={"ID":"965f8eef-c5af-499b-b1db-cf63072781cc","Type":"ContainerStarted","Data":"1ec0a9a6a2a70e7738784d857b11a00cdc6097dd2b89da3fb0fe8b07a42c4df6"}
Mar 08 03:30:34.959880 master-0 kubenswrapper[7387]: I0308 03:30:34.959823 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt" event={"ID":"4711e21f-da6d-47ee-8722-64663e05de10","Type":"ContainerStarted","Data":"246d6769acb25da53bd54a72fe90f2473ba9699859c554f56a49877eea8a3bbc"}
Mar 08 03:30:34.964114 master-0 kubenswrapper[7387]: I0308 03:30:34.964012 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" event={"ID":"bd1bcaff-7dbd-4559-92fc-5453993f643e","Type":"ContainerStarted","Data":"6a04c7bae70f71fda4392d355ed44177c970fa22ef15dc25a22bf21d938d3e38"}
Mar 08 03:30:34.965665 master-0 kubenswrapper[7387]: I0308 03:30:34.965625 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"
Mar 08 03:30:34.967752 master-0 kubenswrapper[7387]: I0308 03:30:34.967708 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"22f31e2b7f0321897dacca58338ef528e1d06507bc628197034c61c7576b258f"}
Mar 08 03:30:34.968711 master-0 kubenswrapper[7387]: I0308 03:30:34.968663 7387 scope.go:117] "RemoveContainer" containerID="f6002d889d471a68aa7f22937f49a82a2b4b24ab138311c194765fb16289177d"
Mar 08 03:30:34.969100 master-0 kubenswrapper[7387]: E0308 03:30:34.969059 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 08 03:30:34.972593 master-0 kubenswrapper[7387]: I0308 03:30:34.972535 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5" event={"ID":"7af634f0-65ac-402a-acd6-a8aad11b37ab","Type":"ContainerStarted","Data":"757dcba4ad6136d3dfe2c473fc8298fe1d538796b0d951d868e9d000ec6f1b81"}
Mar 08 03:30:34.982248 master-0 kubenswrapper[7387]: I0308 03:30:34.982172 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7" event={"ID":"1d446527-f3fd-4a37-a980-7445031928d1","Type":"ContainerStarted","Data":"c2856c6990992566a316fb4d8dce1f2326a521bcd1c2d74c5b4a80a8718dbccb"}
Mar 08 03:30:34.985810 master-0 kubenswrapper[7387]: I0308 03:30:34.985743 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-jd7rl_2ffe00fd-6834-4a5b-8b0b-b467d284f23c/cluster-autoscaler-operator/0.log"
Mar 08 03:30:34.986675 master-0 kubenswrapper[7387]: I0308 03:30:34.986617 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" event={"ID":"2ffe00fd-6834-4a5b-8b0b-b467d284f23c","Type":"ContainerStarted","Data":"fd836d2e75af4c194aa26436c339163d91bc0da2d73d6aca2e385aa21392d868"}
Mar 08 03:30:34.993587 master-0 kubenswrapper[7387]: I0308 03:30:34.993524 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w" event={"ID":"5a058138-8039-4841-821b-7ee5bb8648e4","Type":"ContainerStarted","Data":"a1505e8a0a0c14419f12b2bd193a1c448183aca7abfbdd0023312e4504e44eae"}
Mar 08 03:30:35.600288 master-0 kubenswrapper[7387]: I0308 03:30:35.600208 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:35.600288 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:35.600288 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:35.600288 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:35.601335 master-0 kubenswrapper[7387]: I0308 03:30:35.600310 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:36.003266 master-0 kubenswrapper[7387]: I0308 03:30:36.003192 7387 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-d4wnv container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" start-of-body=
Mar 08 03:30:36.003266 master-0 kubenswrapper[7387]: I0308 03:30:36.003218 7387 scope.go:117] "RemoveContainer" containerID="f6002d889d471a68aa7f22937f49a82a2b4b24ab138311c194765fb16289177d"
Mar 08 03:30:36.003629 master-0 kubenswrapper[7387]: I0308 03:30:36.003268 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" podUID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused"
Mar 08 03:30:36.003870 master-0 kubenswrapper[7387]: E0308 03:30:36.003836 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 08 03:30:36.450139 master-0 kubenswrapper[7387]: I0308 03:30:36.450035 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:30:36.598235 master-0 kubenswrapper[7387]: I0308 03:30:36.598189 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:36.598235 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:36.598235 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:36.598235 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:36.598621 master-0 kubenswrapper[7387]: I0308 03:30:36.598594 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:37.008408 master-0 kubenswrapper[7387]: I0308 03:30:37.008354 7387 scope.go:117] "RemoveContainer" containerID="f6002d889d471a68aa7f22937f49a82a2b4b24ab138311c194765fb16289177d"
Mar 08 03:30:37.008968 master-0 kubenswrapper[7387]: E0308 03:30:37.008614 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 08 03:30:37.581556 master-0 kubenswrapper[7387]: I0308 03:30:37.581517 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh"
Mar 08 03:30:37.598694 master-0 kubenswrapper[7387]: I0308 03:30:37.598619 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:37.598694 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:37.598694 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:37.598694 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:37.599014 master-0 kubenswrapper[7387]: I0308 03:30:37.598700 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:38.386855 master-0 kubenswrapper[7387]: I0308 03:30:38.386692 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"
Mar 08 03:30:38.599589 master-0 kubenswrapper[7387]: I0308 03:30:38.599471 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:38.599589 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:38.599589 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:38.599589 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:38.599589 master-0 kubenswrapper[7387]: I0308 03:30:38.599576 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:39.598258 master-0 kubenswrapper[7387]: I0308 03:30:39.598193 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:39.598258 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:39.598258 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:39.598258 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:39.598932 master-0 kubenswrapper[7387]: I0308 03:30:39.598260 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:40.599675 master-0 kubenswrapper[7387]: I0308 03:30:40.599594 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:40.599675 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:40.599675 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:40.599675 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:40.600711 master-0 kubenswrapper[7387]: I0308 03:30:40.599707 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:41.599423 master-0 kubenswrapper[7387]: I0308 03:30:41.599329 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:41.599423 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:41.599423 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:41.599423 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:41.600657 master-0 kubenswrapper[7387]: I0308 03:30:41.599433 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:41.760098 master-0 kubenswrapper[7387]: I0308 03:30:41.760023 7387 scope.go:117] "RemoveContainer" containerID="b291f8e827490042eb3fb88b716e290e8802aa029f1abc52b08ef049c1f2620a"
Mar 08 03:30:41.760477 master-0 kubenswrapper[7387]: E0308 03:30:41.760424 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-kfmd9_openshift-cluster-storage-operator(9fb588a9-6240-4513-8e4b-248eb43d3f06)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" podUID="9fb588a9-6240-4513-8e4b-248eb43d3f06"
Mar 08 03:30:42.598675 master-0 kubenswrapper[7387]: I0308 03:30:42.598557 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:42.598675 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:42.598675 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:42.598675 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:42.598675 master-0 kubenswrapper[7387]: I0308 03:30:42.598674 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:43.599497 master-0 kubenswrapper[7387]: I0308 03:30:43.599409 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:43.599497 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:43.599497 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:43.599497 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:43.600538 master-0 kubenswrapper[7387]: I0308 03:30:43.599497 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:44.599180 master-0 kubenswrapper[7387]: I0308 03:30:44.599084 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:44.599180 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:44.599180 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:44.599180 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:44.599180 master-0 kubenswrapper[7387]: I0308 03:30:44.599166 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:44.639387 master-0 kubenswrapper[7387]: I0308 03:30:44.639324 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:30:44.640234 master-0 kubenswrapper[7387]: I0308 03:30:44.640186 7387 scope.go:117] "RemoveContainer" containerID="f6002d889d471a68aa7f22937f49a82a2b4b24ab138311c194765fb16289177d"
Mar 08 03:30:44.640558 master-0 kubenswrapper[7387]: E0308 03:30:44.640513 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 08 03:30:44.647894 master-0 kubenswrapper[7387]: I0308 03:30:44.647846 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:30:45.068809 master-0 kubenswrapper[7387]: I0308 03:30:45.068731 7387 scope.go:117] "RemoveContainer" containerID="f6002d889d471a68aa7f22937f49a82a2b4b24ab138311c194765fb16289177d"
Mar 08 03:30:45.069212 master-0 kubenswrapper[7387]: E0308 03:30:45.069154 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 08 03:30:45.075876 master-0 kubenswrapper[7387]: I0308 03:30:45.075767 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:30:45.283587 master-0 kubenswrapper[7387]: I0308 03:30:45.283510 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-retry-1-master-0"]
Mar 08 03:30:45.284789 master-0 kubenswrapper[7387]: I0308 03:30:45.284743 7387 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 08 03:30:45.289010 master-0 kubenswrapper[7387]: I0308 03:30:45.287889 7387 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-sglg6" Mar 08 03:30:45.289292 master-0 kubenswrapper[7387]: I0308 03:30:45.289032 7387 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 08 03:30:45.315037 master-0 kubenswrapper[7387]: I0308 03:30:45.314979 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-retry-1-master-0"] Mar 08 03:30:45.404872 master-0 kubenswrapper[7387]: I0308 03:30:45.404736 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e6716923-7f46-438f-9cc4-c0f071ca5b1a-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"e6716923-7f46-438f-9cc4-c0f071ca5b1a\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 08 03:30:45.404872 master-0 kubenswrapper[7387]: I0308 03:30:45.404803 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6716923-7f46-438f-9cc4-c0f071ca5b1a-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"e6716923-7f46-438f-9cc4-c0f071ca5b1a\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 08 03:30:45.404872 master-0 kubenswrapper[7387]: I0308 03:30:45.404830 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6716923-7f46-438f-9cc4-c0f071ca5b1a-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"e6716923-7f46-438f-9cc4-c0f071ca5b1a\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 08 03:30:45.506840 master-0 
kubenswrapper[7387]: I0308 03:30:45.506767 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6716923-7f46-438f-9cc4-c0f071ca5b1a-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"e6716923-7f46-438f-9cc4-c0f071ca5b1a\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 08 03:30:45.507061 master-0 kubenswrapper[7387]: I0308 03:30:45.506886 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6716923-7f46-438f-9cc4-c0f071ca5b1a-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"e6716923-7f46-438f-9cc4-c0f071ca5b1a\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 08 03:30:45.507115 master-0 kubenswrapper[7387]: I0308 03:30:45.507083 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e6716923-7f46-438f-9cc4-c0f071ca5b1a-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"e6716923-7f46-438f-9cc4-c0f071ca5b1a\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 08 03:30:45.507187 master-0 kubenswrapper[7387]: I0308 03:30:45.507120 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6716923-7f46-438f-9cc4-c0f071ca5b1a-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"e6716923-7f46-438f-9cc4-c0f071ca5b1a\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 08 03:30:45.507275 master-0 kubenswrapper[7387]: I0308 03:30:45.507234 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e6716923-7f46-438f-9cc4-c0f071ca5b1a-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"e6716923-7f46-438f-9cc4-c0f071ca5b1a\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" 
Mar 08 03:30:45.524173 master-0 kubenswrapper[7387]: I0308 03:30:45.524135 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6716923-7f46-438f-9cc4-c0f071ca5b1a-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"e6716923-7f46-438f-9cc4-c0f071ca5b1a\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 08 03:30:45.599519 master-0 kubenswrapper[7387]: I0308 03:30:45.599443 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:30:45.599519 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:30:45.599519 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:30:45.599519 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:30:45.600208 master-0 kubenswrapper[7387]: I0308 03:30:45.599542 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:30:45.617160 master-0 kubenswrapper[7387]: I0308 03:30:45.617111 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 08 03:30:46.079545 master-0 kubenswrapper[7387]: I0308 03:30:46.079445 7387 scope.go:117] "RemoveContainer" containerID="f6002d889d471a68aa7f22937f49a82a2b4b24ab138311c194765fb16289177d" Mar 08 03:30:46.080148 master-0 kubenswrapper[7387]: E0308 03:30:46.080051 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 08 03:30:46.142307 master-0 kubenswrapper[7387]: I0308 03:30:46.142039 7387 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-retry-1-master-0"] Mar 08 03:30:46.144204 master-0 kubenswrapper[7387]: W0308 03:30:46.144151 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode6716923_7f46_438f_9cc4_c0f071ca5b1a.slice/crio-fcc3b92d08a13fa636c372e9652644c8188d8f895a9f938085de2edbe54bf982 WatchSource:0}: Error finding container fcc3b92d08a13fa636c372e9652644c8188d8f895a9f938085de2edbe54bf982: Status 404 returned error can't find the container with id fcc3b92d08a13fa636c372e9652644c8188d8f895a9f938085de2edbe54bf982 Mar 08 03:30:46.602574 master-0 kubenswrapper[7387]: I0308 03:30:46.602502 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:30:46.602574 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:30:46.602574 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:30:46.602574 master-0 
kubenswrapper[7387]: healthz check failed Mar 08 03:30:46.603703 master-0 kubenswrapper[7387]: I0308 03:30:46.602585 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:30:47.091037 master-0 kubenswrapper[7387]: I0308 03:30:47.090968 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" event={"ID":"e6716923-7f46-438f-9cc4-c0f071ca5b1a","Type":"ContainerStarted","Data":"c63ef8e2456c825e658d5f608a85868873e2b693945cba943036d87c971f2472"} Mar 08 03:30:47.091037 master-0 kubenswrapper[7387]: I0308 03:30:47.091046 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" event={"ID":"e6716923-7f46-438f-9cc4-c0f071ca5b1a","Type":"ContainerStarted","Data":"fcc3b92d08a13fa636c372e9652644c8188d8f895a9f938085de2edbe54bf982"} Mar 08 03:30:47.116654 master-0 kubenswrapper[7387]: I0308 03:30:47.116552 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" podStartSLOduration=2.116533195 podStartE2EDuration="2.116533195s" podCreationTimestamp="2026-03-08 03:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:30:47.112866883 +0000 UTC m=+1183.507342554" watchObservedRunningTime="2026-03-08 03:30:47.116533195 +0000 UTC m=+1183.511008896" Mar 08 03:30:47.599629 master-0 kubenswrapper[7387]: I0308 03:30:47.599578 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:30:47.599629 
master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:30:47.599629 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:30:47.599629 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:30:47.600354 master-0 kubenswrapper[7387]: I0308 03:30:47.600312 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:30:48.598553 master-0 kubenswrapper[7387]: I0308 03:30:48.598458 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:30:48.598553 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:30:48.598553 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:30:48.598553 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:30:48.599306 master-0 kubenswrapper[7387]: I0308 03:30:48.598559 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:30:49.599056 master-0 kubenswrapper[7387]: I0308 03:30:49.598893 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:30:49.599056 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:30:49.599056 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:30:49.599056 master-0 kubenswrapper[7387]: healthz check failed 
Mar 08 03:30:49.599056 master-0 kubenswrapper[7387]: I0308 03:30:49.599018 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:30:49.677686 master-0 kubenswrapper[7387]: I0308 03:30:49.677610 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-k8xgg_90ef7c0a-7c6f-45aa-865d-1e247110b265/authentication-operator/3.log" Mar 08 03:30:49.868648 master-0 kubenswrapper[7387]: I0308 03:30:49.868496 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-k8xgg_90ef7c0a-7c6f-45aa-865d-1e247110b265/authentication-operator/4.log" Mar 08 03:30:50.075423 master-0 kubenswrapper[7387]: I0308 03:30:50.075345 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-79f8cd6fdd-tkxj9_e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d/router/2.log" Mar 08 03:30:50.264187 master-0 kubenswrapper[7387]: I0308 03:30:50.264102 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-7b545788fb-82rjl_3a2a141d-a4c3-4b6c-a90b-d184f61a14dd/fix-audit-permissions/0.log" Mar 08 03:30:50.477453 master-0 kubenswrapper[7387]: I0308 03:30:50.477358 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-7b545788fb-82rjl_3a2a141d-a4c3-4b6c-a90b-d184f61a14dd/oauth-apiserver/0.log" Mar 08 03:30:50.599268 master-0 kubenswrapper[7387]: I0308 03:30:50.599100 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:30:50.599268 master-0 
kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:30:50.599268 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:30:50.599268 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:30:50.599268 master-0 kubenswrapper[7387]: I0308 03:30:50.599220 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:30:50.664921 master-0 kubenswrapper[7387]: I0308 03:30:50.664854 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_0a8d4b89-fd81-4418-9f72-c8447fad86ad/installer/0.log" Mar 08 03:30:50.865576 master-0 kubenswrapper[7387]: I0308 03:30:50.865408 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_6a7152f2-d51f-4e15-8e0a-92278cbecd53/installer/0.log" Mar 08 03:30:50.969817 master-0 kubenswrapper[7387]: I0308 03:30:50.969729 7387 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 08 03:30:50.970138 master-0 kubenswrapper[7387]: I0308 03:30:50.970102 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" containerID="cri-o://22f31e2b7f0321897dacca58338ef528e1d06507bc628197034c61c7576b258f" gracePeriod=30 Mar 08 03:30:50.971709 master-0 kubenswrapper[7387]: I0308 03:30:50.971652 7387 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 08 03:30:50.972021 master-0 kubenswrapper[7387]: E0308 03:30:50.971987 7387 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 03:30:50.972021 master-0 kubenswrapper[7387]: I0308 03:30:50.972008 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 03:30:50.972021 master-0 kubenswrapper[7387]: E0308 03:30:50.972021 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 08 03:30:50.972227 master-0 kubenswrapper[7387]: I0308 03:30:50.972030 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 08 03:30:50.972227 master-0 kubenswrapper[7387]: E0308 03:30:50.972046 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 03:30:50.972227 master-0 kubenswrapper[7387]: I0308 03:30:50.972054 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 03:30:50.972227 master-0 kubenswrapper[7387]: E0308 03:30:50.972068 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 03:30:50.972227 master-0 kubenswrapper[7387]: I0308 03:30:50.972076 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 03:30:50.972227 master-0 kubenswrapper[7387]: E0308 03:30:50.972089 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 08 03:30:50.972227 master-0 kubenswrapper[7387]: I0308 03:30:50.972099 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" 
containerName="cluster-policy-controller" Mar 08 03:30:50.972227 master-0 kubenswrapper[7387]: E0308 03:30:50.972114 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 03:30:50.972227 master-0 kubenswrapper[7387]: I0308 03:30:50.972123 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 03:30:50.972768 master-0 kubenswrapper[7387]: I0308 03:30:50.972265 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 03:30:50.972768 master-0 kubenswrapper[7387]: I0308 03:30:50.972284 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 08 03:30:50.972768 master-0 kubenswrapper[7387]: I0308 03:30:50.972298 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 08 03:30:50.972768 master-0 kubenswrapper[7387]: I0308 03:30:50.972311 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 03:30:50.972768 master-0 kubenswrapper[7387]: I0308 03:30:50.972320 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 03:30:50.972768 master-0 kubenswrapper[7387]: I0308 03:30:50.972335 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 03:30:50.972768 master-0 kubenswrapper[7387]: I0308 03:30:50.972346 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" 
containerName="kube-controller-manager" Mar 08 03:30:50.972768 master-0 kubenswrapper[7387]: I0308 03:30:50.972357 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 03:30:50.972768 master-0 kubenswrapper[7387]: I0308 03:30:50.972365 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 08 03:30:50.972768 master-0 kubenswrapper[7387]: E0308 03:30:50.972494 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 03:30:50.972768 master-0 kubenswrapper[7387]: I0308 03:30:50.972504 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 03:30:50.972768 master-0 kubenswrapper[7387]: E0308 03:30:50.972517 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 03:30:50.972768 master-0 kubenswrapper[7387]: I0308 03:30:50.972524 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 03:30:50.972768 master-0 kubenswrapper[7387]: E0308 03:30:50.972537 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 08 03:30:50.972768 master-0 kubenswrapper[7387]: I0308 03:30:50.972544 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 08 03:30:50.972768 master-0 kubenswrapper[7387]: E0308 03:30:50.972571 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 
08 03:30:50.972768 master-0 kubenswrapper[7387]: I0308 03:30:50.972579 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 08 03:30:50.972768 master-0 kubenswrapper[7387]: I0308 03:30:50.972746 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 03:30:50.972768 master-0 kubenswrapper[7387]: I0308 03:30:50.972759 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 08 03:30:50.973853 master-0 kubenswrapper[7387]: E0308 03:30:50.972895 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 03:30:50.973853 master-0 kubenswrapper[7387]: I0308 03:30:50.972921 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 03:30:50.973853 master-0 kubenswrapper[7387]: I0308 03:30:50.973819 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:30:51.071640 master-0 kubenswrapper[7387]: I0308 03:30:51.071392 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-xtwpr_2468d2a3-ec65-4888-a86a-3f66fa311f56/kube-controller-manager-operator/3.log" Mar 08 03:30:51.091495 master-0 kubenswrapper[7387]: I0308 03:30:51.090997 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6c635212a8e9ee60477413d34dfb3c70-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"6c635212a8e9ee60477413d34dfb3c70\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:30:51.091495 master-0 kubenswrapper[7387]: I0308 03:30:51.091418 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6c635212a8e9ee60477413d34dfb3c70-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"6c635212a8e9ee60477413d34dfb3c70\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:30:51.125743 master-0 kubenswrapper[7387]: I0308 03:30:51.125329 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 08 03:30:51.137934 master-0 kubenswrapper[7387]: I0308 03:30:51.137814 7387 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="22f31e2b7f0321897dacca58338ef528e1d06507bc628197034c61c7576b258f" exitCode=0 Mar 08 03:30:51.137934 master-0 kubenswrapper[7387]: I0308 03:30:51.137896 7387 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f1c6c0636a4899d7b1fba463483019132e2775ba2d317a272e9611e9eb04fdb" Mar 08 03:30:51.138354 
master-0 kubenswrapper[7387]: I0308 03:30:51.137956 7387 scope.go:117] "RemoveContainer" containerID="85ebc2aadcc00fbddf926f6ab17ab8c204935ad575ebd07cf7adcfc06b4a6c08"
Mar 08 03:30:51.194400 master-0 kubenswrapper[7387]: I0308 03:30:51.193494 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6c635212a8e9ee60477413d34dfb3c70-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"6c635212a8e9ee60477413d34dfb3c70\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:30:51.194400 master-0 kubenswrapper[7387]: I0308 03:30:51.193652 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6c635212a8e9ee60477413d34dfb3c70-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"6c635212a8e9ee60477413d34dfb3c70\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:30:51.194400 master-0 kubenswrapper[7387]: I0308 03:30:51.193815 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6c635212a8e9ee60477413d34dfb3c70-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"6c635212a8e9ee60477413d34dfb3c70\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:30:51.194400 master-0 kubenswrapper[7387]: I0308 03:30:51.193941 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6c635212a8e9ee60477413d34dfb3c70-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"6c635212a8e9ee60477413d34dfb3c70\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:30:51.267883 master-0 kubenswrapper[7387]: I0308 03:30:51.267842 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:30:51.279990 master-0 kubenswrapper[7387]: I0308 03:30:51.279948 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-xtwpr_2468d2a3-ec65-4888-a86a-3f66fa311f56/kube-controller-manager-operator/4.log"
Mar 08 03:30:51.297387 master-0 kubenswrapper[7387]: I0308 03:30:51.296574 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") "
Mar 08 03:30:51.297387 master-0 kubenswrapper[7387]: I0308 03:30:51.296650 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") "
Mar 08 03:30:51.297387 master-0 kubenswrapper[7387]: I0308 03:30:51.296778 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") "
Mar 08 03:30:51.297387 master-0 kubenswrapper[7387]: I0308 03:30:51.296884 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") "
Mar 08 03:30:51.297387 master-0 kubenswrapper[7387]: I0308 03:30:51.297049 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") "
Mar 08 03:30:51.297829 master-0 kubenswrapper[7387]: I0308 03:30:51.297726 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs" (OuterVolumeSpecName: "logs") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:30:51.297829 master-0 kubenswrapper[7387]: I0308 03:30:51.297795 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config" (OuterVolumeSpecName: "config") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:30:51.298002 master-0 kubenswrapper[7387]: I0308 03:30:51.297839 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets" (OuterVolumeSpecName: "secrets") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:30:51.298002 master-0 kubenswrapper[7387]: I0308 03:30:51.297889 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:30:51.298002 master-0 kubenswrapper[7387]: I0308 03:30:51.297969 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:30:51.335287 master-0 kubenswrapper[7387]: I0308 03:30:51.335205 7387 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="9a6480d3-7182-439a-81af-17c2a49c776e"
Mar 08 03:30:51.398744 master-0 kubenswrapper[7387]: I0308 03:30:51.398577 7387 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") on node \"master-0\" DevicePath \"\""
Mar 08 03:30:51.398744 master-0 kubenswrapper[7387]: I0308 03:30:51.398627 7387 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\""
Mar 08 03:30:51.398744 master-0 kubenswrapper[7387]: I0308 03:30:51.398645 7387 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") on node \"master-0\" DevicePath \"\""
Mar 08 03:30:51.398744 master-0 kubenswrapper[7387]: I0308 03:30:51.398659 7387 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") on node \"master-0\" DevicePath \"\""
Mar 08 03:30:51.398744 master-0 kubenswrapper[7387]: I0308 03:30:51.398670 7387 reconciler_common.go:293] "Volume
detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") on node \"master-0\" DevicePath \"\""
Mar 08 03:30:51.417414 master-0 kubenswrapper[7387]: I0308 03:30:51.417337 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:30:51.454622 master-0 kubenswrapper[7387]: W0308 03:30:51.454542 7387 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c635212a8e9ee60477413d34dfb3c70.slice/crio-981e0f271702172a27daba182461095b8682ca12b72ed3f46de2b6751994f11f WatchSource:0}: Error finding container 981e0f271702172a27daba182461095b8682ca12b72ed3f46de2b6751994f11f: Status 404 returned error can't find the container with id 981e0f271702172a27daba182461095b8682ca12b72ed3f46de2b6751994f11f
Mar 08 03:30:51.599817 master-0 kubenswrapper[7387]: I0308 03:30:51.599752 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:51.599817 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:51.599817 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:51.599817 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:51.600809 master-0 kubenswrapper[7387]: I0308 03:30:51.599829 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:51.774444 master-0 kubenswrapper[7387]: I0308 03:30:51.774372 7387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f78c05e1499b533b83f091333d61f045" path="/var/lib/kubelet/pods/f78c05e1499b533b83f091333d61f045/volumes"
Mar 08 03:30:51.775044 master-0 kubenswrapper[7387]: I0308 03:30:51.775002 7387 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID=""
Mar 08 03:30:51.797574 master-0 kubenswrapper[7387]: I0308 03:30:51.797489 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 08 03:30:51.797574 master-0 kubenswrapper[7387]: I0308 03:30:51.797566 7387 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="9a6480d3-7182-439a-81af-17c2a49c776e"
Mar 08 03:30:51.804043 master-0 kubenswrapper[7387]: I0308 03:30:51.803863 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 08 03:30:51.804043 master-0 kubenswrapper[7387]: I0308 03:30:51.804011 7387 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="9a6480d3-7182-439a-81af-17c2a49c776e"
Mar 08 03:30:52.153835 master-0 kubenswrapper[7387]: I0308 03:30:52.153777 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"6c635212a8e9ee60477413d34dfb3c70","Type":"ContainerStarted","Data":"c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97"}
Mar 08 03:30:52.154187 master-0 kubenswrapper[7387]: I0308 03:30:52.154157 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"6c635212a8e9ee60477413d34dfb3c70","Type":"ContainerStarted","Data":"792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51"}
Mar 08 03:30:52.154336 master-0 kubenswrapper[7387]: I0308 03:30:52.154311 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"6c635212a8e9ee60477413d34dfb3c70","Type":"ContainerStarted","Data":"981e0f271702172a27daba182461095b8682ca12b72ed3f46de2b6751994f11f"}
Mar 08 03:30:52.156672 master-0 kubenswrapper[7387]: I0308 03:30:52.156630 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 08 03:30:52.162100 master-0 kubenswrapper[7387]: I0308 03:30:52.162061 7387 generic.go:334] "Generic (PLEG): container finished" podID="627f0501-8b6a-4bc7-b610-355a0661f385" containerID="39acd779a6b4efc5eaa5408d29d32ff65cfd712c0fbed2aa3652c2244b17d9bc" exitCode=0
Mar 08 03:30:52.162100 master-0 kubenswrapper[7387]: I0308 03:30:52.162096 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"627f0501-8b6a-4bc7-b610-355a0661f385","Type":"ContainerDied","Data":"39acd779a6b4efc5eaa5408d29d32ff65cfd712c0fbed2aa3652c2244b17d9bc"}
Mar 08 03:30:52.273956 master-0 kubenswrapper[7387]: I0308 03:30:52.273768 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_ddf7d93b-6a73-4de5-b984-cde6fba07b48/installer/0.log"
Mar 08 03:30:52.598884 master-0 kubenswrapper[7387]: I0308 03:30:52.598708 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:52.598884 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:52.598884 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:52.598884 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:52.598884 master-0 kubenswrapper[7387]: I0308 03:30:52.598785 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:52.665556 master-0 kubenswrapper[7387]: I0308 03:30:52.665490 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler/0.log"
Mar 08 03:30:52.760325 master-0 kubenswrapper[7387]: I0308 03:30:52.760258 7387 scope.go:117] "RemoveContainer" containerID="b291f8e827490042eb3fb88b716e290e8802aa029f1abc52b08ef049c1f2620a"
Mar 08 03:30:52.760539 master-0 kubenswrapper[7387]: E0308 03:30:52.760503 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-kfmd9_openshift-cluster-storage-operator(9fb588a9-6240-4513-8e4b-248eb43d3f06)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" podUID="9fb588a9-6240-4513-8e4b-248eb43d3f06"
Mar 08 03:30:53.176168 master-0 kubenswrapper[7387]: I0308 03:30:53.176066 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"6c635212a8e9ee60477413d34dfb3c70","Type":"ContainerStarted","Data":"689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334"}
Mar 08 03:30:53.176168 master-0 kubenswrapper[7387]: I0308 03:30:53.176144 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"6c635212a8e9ee60477413d34dfb3c70","Type":"ContainerStarted","Data":"2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843"}
Mar 08 03:30:53.222930 master-0 kubenswrapper[7387]: I0308 03:30:53.217285 7387 pod_startup_latency_tracker.go:104] "Observed pod
startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.217258778 podStartE2EDuration="2.217258778s" podCreationTimestamp="2026-03-08 03:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:30:53.213167305 +0000 UTC m=+1189.607642996" watchObservedRunningTime="2026-03-08 03:30:53.217258778 +0000 UTC m=+1189.611734479"
Mar 08 03:30:53.262881 master-0 kubenswrapper[7387]: I0308 03:30:53.262761 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/wait-for-host-port/0.log"
Mar 08 03:30:53.472132 master-0 kubenswrapper[7387]: I0308 03:30:53.471141 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler/1.log"
Mar 08 03:30:53.501668 master-0 kubenswrapper[7387]: I0308 03:30:53.501179 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 08 03:30:53.533610 master-0 kubenswrapper[7387]: I0308 03:30:53.533521 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/627f0501-8b6a-4bc7-b610-355a0661f385-kubelet-dir\") pod \"627f0501-8b6a-4bc7-b610-355a0661f385\" (UID: \"627f0501-8b6a-4bc7-b610-355a0661f385\") "
Mar 08 03:30:53.533610 master-0 kubenswrapper[7387]: I0308 03:30:53.533590 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/627f0501-8b6a-4bc7-b610-355a0661f385-kube-api-access\") pod \"627f0501-8b6a-4bc7-b610-355a0661f385\" (UID: \"627f0501-8b6a-4bc7-b610-355a0661f385\") "
Mar 08 03:30:53.533610 master-0 kubenswrapper[7387]: I0308 03:30:53.533614 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/627f0501-8b6a-4bc7-b610-355a0661f385-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "627f0501-8b6a-4bc7-b610-355a0661f385" (UID: "627f0501-8b6a-4bc7-b610-355a0661f385"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:30:53.534138 master-0 kubenswrapper[7387]: I0308 03:30:53.533703 7387 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/627f0501-8b6a-4bc7-b610-355a0661f385-var-lock\") pod \"627f0501-8b6a-4bc7-b610-355a0661f385\" (UID: \"627f0501-8b6a-4bc7-b610-355a0661f385\") "
Mar 08 03:30:53.534138 master-0 kubenswrapper[7387]: I0308 03:30:53.534012 7387 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/627f0501-8b6a-4bc7-b610-355a0661f385-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:30:53.534138 master-0 kubenswrapper[7387]: I0308 03:30:53.534048 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/627f0501-8b6a-4bc7-b610-355a0661f385-var-lock" (OuterVolumeSpecName: "var-lock") pod "627f0501-8b6a-4bc7-b610-355a0661f385" (UID: "627f0501-8b6a-4bc7-b610-355a0661f385"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:30:53.536959 master-0 kubenswrapper[7387]: I0308 03:30:53.536865 7387 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/627f0501-8b6a-4bc7-b610-355a0661f385-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "627f0501-8b6a-4bc7-b610-355a0661f385" (UID: "627f0501-8b6a-4bc7-b610-355a0661f385"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:30:53.599094 master-0 kubenswrapper[7387]: I0308 03:30:53.599020 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:53.599094 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:53.599094 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:53.599094 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:53.599094 master-0 kubenswrapper[7387]: I0308 03:30:53.599091 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:53.635248 master-0 kubenswrapper[7387]: I0308 03:30:53.635166 7387 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/627f0501-8b6a-4bc7-b610-355a0661f385-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 08 03:30:53.635248 master-0 kubenswrapper[7387]: I0308 03:30:53.635240 7387 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/627f0501-8b6a-4bc7-b610-355a0661f385-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 08 03:30:53.667390 master-0 kubenswrapper[7387]: I0308 03:30:53.667330 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler-cert-syncer/0.log"
Mar 08 03:30:53.866440 master-0 kubenswrapper[7387]: I0308 03:30:53.865634 7387 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler-recovery-controller/0.log"
Mar 08 03:30:54.071755 master-0 kubenswrapper[7387]: I0308 03:30:54.071564 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-rz5c8_89e15db4-c541-4d53-878d-706fa022f970/kube-scheduler-operator-container/3.log"
Mar 08 03:30:54.185157 master-0 kubenswrapper[7387]: I0308 03:30:54.185064 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"627f0501-8b6a-4bc7-b610-355a0661f385","Type":"ContainerDied","Data":"b797749641d447516f356d6b48bcc046c06d0d3a6ceeefc387a38da2d330845e"}
Mar 08 03:30:54.185419 master-0 kubenswrapper[7387]: I0308 03:30:54.185169 7387 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b797749641d447516f356d6b48bcc046c06d0d3a6ceeefc387a38da2d330845e"
Mar 08 03:30:54.185419 master-0 kubenswrapper[7387]: I0308 03:30:54.185095 7387 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 08 03:30:54.265361 master-0 kubenswrapper[7387]: I0308 03:30:54.265274 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-rz5c8_89e15db4-c541-4d53-878d-706fa022f970/kube-scheduler-operator-container/4.log"
Mar 08 03:30:54.469385 master-0 kubenswrapper[7387]: I0308 03:30:54.469255 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-gstfr_2a506cf6-bc39-4089-9caa-4c14c4d15c11/openshift-apiserver-operator/3.log"
Mar 08 03:30:54.598840 master-0 kubenswrapper[7387]: I0308 03:30:54.598790 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:54.598840 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:54.598840 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:54.598840 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:54.599114 master-0 kubenswrapper[7387]: I0308 03:30:54.598862 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:54.677091 master-0 kubenswrapper[7387]: I0308 03:30:54.677036 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-gstfr_2a506cf6-bc39-4089-9caa-4c14c4d15c11/openshift-apiserver-operator/4.log"
Mar 08 03:30:54.864777 master-0 kubenswrapper[7387]: I0308 03:30:54.864696 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-5bf974f84f-hzx44_f2057f75-159d-4416-a234-050f0fe1afc9/fix-audit-permissions/0.log"
Mar 08 03:30:55.072875 master-0 kubenswrapper[7387]: I0308 03:30:55.072796 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-5bf974f84f-hzx44_f2057f75-159d-4416-a234-050f0fe1afc9/openshift-apiserver/0.log"
Mar 08 03:30:55.267848 master-0 kubenswrapper[7387]: I0308 03:30:55.267790 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-5bf974f84f-hzx44_f2057f75-159d-4416-a234-050f0fe1afc9/openshift-apiserver-check-endpoints/0.log"
Mar 08 03:30:55.471503 master-0 kubenswrapper[7387]: I0308 03:30:55.471428 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-dn4ll_c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/etcd-operator/4.log"
Mar 08 03:30:55.600558 master-0 kubenswrapper[7387]: I0308 03:30:55.600422 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:55.600558 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:55.600558 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:55.600558 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:55.600558 master-0 kubenswrapper[7387]: I0308 03:30:55.600509 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:55.666154 master-0 kubenswrapper[7387]: I0308 03:30:55.666094 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-dn4ll_c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/etcd-operator/5.log"
Mar 08 03:30:55.870148 master-0 kubenswrapper[7387]: I0308 03:30:55.869963 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-7d9c49f57b-wsswx_5a92a557-d023-4531-b3a3-e559af0fe358/catalog-operator/0.log"
Mar 08 03:30:56.070786 master-0 kubenswrapper[7387]: I0308 03:30:56.070704 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-d64cfc9db-t659n_d68278f6-59d5-4bbf-b969-e47635ffd4cc/olm-operator/0.log"
Mar 08 03:30:56.468728 master-0 kubenswrapper[7387]: I0308 03:30:56.468660 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-8qznw_f8711b9f-3d18-4b8d-a263-2c9af9dc68a6/package-server-manager/1.log"
Mar 08 03:30:56.599873 master-0 kubenswrapper[7387]: I0308 03:30:56.599779 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:56.599873 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:56.599873 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:56.599873 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:56.600543 master-0 kubenswrapper[7387]: I0308 03:30:56.599891 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:56.665489 master-0 kubenswrapper[7387]: I0308 03:30:56.665112 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-8qznw_f8711b9f-3d18-4b8d-a263-2c9af9dc68a6/kube-rbac-proxy/0.log"
Mar 08 03:30:56.865575 master-0 kubenswrapper[7387]: I0308 03:30:56.865472 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-8qznw_f8711b9f-3d18-4b8d-a263-2c9af9dc68a6/package-server-manager/2.log"
Mar 08 03:30:57.068556 master-0 kubenswrapper[7387]: I0308 03:30:57.068497 7387 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-7fcc847fc6-s2lnw_7a1b7b0d-6e00-485e-86e8-7bd047569328/packageserver/0.log"
Mar 08 03:30:57.599335 master-0 kubenswrapper[7387]: I0308 03:30:57.599264 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:57.599335 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:57.599335 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:57.599335 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:57.599601 master-0 kubenswrapper[7387]: I0308 03:30:57.599360 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:58.599094 master-0 kubenswrapper[7387]: I0308 03:30:58.598998 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:58.599094 master-0 kubenswrapper[7387]: [-]has-synced failed:
reason withheld
Mar 08 03:30:58.599094 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:58.599094 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:58.600210 master-0 kubenswrapper[7387]: I0308 03:30:58.599096 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:30:59.598759 master-0 kubenswrapper[7387]: I0308 03:30:59.598583 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:30:59.598759 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:30:59.598759 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:30:59.598759 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:30:59.598759 master-0 kubenswrapper[7387]: I0308 03:30:59.598681 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:31:00.599599 master-0 kubenswrapper[7387]: I0308 03:31:00.599502 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:31:00.599599 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:31:00.599599 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:31:00.599599 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:31:00.600547 master-0 kubenswrapper[7387]: I0308 03:31:00.599613 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:31:01.418331 master-0 kubenswrapper[7387]: I0308 03:31:01.418223 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:31:01.418683 master-0 kubenswrapper[7387]: I0308 03:31:01.418488 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:31:01.418683 master-0 kubenswrapper[7387]: I0308 03:31:01.418537 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:31:01.418683 master-0 kubenswrapper[7387]: I0308 03:31:01.418566 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:31:01.425390 master-0 kubenswrapper[7387]: I0308 03:31:01.425316 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:31:01.426973 master-0 kubenswrapper[7387]: I0308 03:31:01.426867 7387 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:31:01.599808 master-0 kubenswrapper[7387]: I0308 03:31:01.599669 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:31:01.599808 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:31:01.599808 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:31:01.599808 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:31:01.600869 master-0 kubenswrapper[7387]: I0308 03:31:01.599819 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:31:02.269376 master-0 kubenswrapper[7387]: I0308 03:31:02.269286 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:31:02.270990 master-0 kubenswrapper[7387]: I0308 03:31:02.270543 7387 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:31:02.599503 master-0 kubenswrapper[7387]: I0308 03:31:02.599361 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:31:02.599503 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:31:02.599503 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:31:02.599503 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:31:02.599503 master-0 kubenswrapper[7387]: I0308 03:31:02.599450 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:31:03.599179 master-0 kubenswrapper[7387]: I0308 03:31:03.599094 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:31:03.599179 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:31:03.599179 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:31:03.599179 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:31:03.599639 master-0 kubenswrapper[7387]: I0308 03:31:03.599188 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:31:04.598986 master-0 kubenswrapper[7387]: I0308 03:31:04.598923 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:31:04.598986 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:31:04.598986 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:31:04.598986 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:31:04.598986 master-0 kubenswrapper[7387]: I0308 03:31:04.598982 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:31:05.600147 master-0 kubenswrapper[7387]: I0308 03:31:05.600051 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:31:05.600147 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:31:05.600147 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:31:05.600147 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:31:05.601621 master-0 kubenswrapper[7387]: I0308 03:31:05.600159 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:31:06.598943 master-0 kubenswrapper[7387]: I0308 03:31:06.598818 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 08 03:31:06.598943 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld
Mar 08 03:31:06.598943 master-0 kubenswrapper[7387]: [+]process-running ok
Mar 08 03:31:06.598943 master-0 kubenswrapper[7387]: healthz check failed
Mar 08 03:31:06.599536 master-0 kubenswrapper[7387]: I0308 03:31:06.598965 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 03:31:06.759711 master-0 kubenswrapper[7387]: I0308 03:31:06.759651 7387 scope.go:117] "RemoveContainer" containerID="b291f8e827490042eb3fb88b716e290e8802aa029f1abc52b08ef049c1f2620a"
Mar 08 03:31:06.760760 master-0 kubenswrapper[7387]: E0308 03:31:06.760707 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-kfmd9_openshift-cluster-storage-operator(9fb588a9-6240-4513-8e4b-248eb43d3f06)\""
pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" podUID="9fb588a9-6240-4513-8e4b-248eb43d3f06" Mar 08 03:31:07.599803 master-0 kubenswrapper[7387]: I0308 03:31:07.599669 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:31:07.599803 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:31:07.599803 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:31:07.599803 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:31:07.600292 master-0 kubenswrapper[7387]: I0308 03:31:07.599797 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:31:08.599548 master-0 kubenswrapper[7387]: I0308 03:31:08.599457 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:31:08.599548 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:31:08.599548 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:31:08.599548 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:31:08.600091 master-0 kubenswrapper[7387]: I0308 03:31:08.599566 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:31:09.600684 master-0 kubenswrapper[7387]: I0308 03:31:09.600573 7387 
patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:31:09.600684 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:31:09.600684 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:31:09.600684 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:31:09.601752 master-0 kubenswrapper[7387]: I0308 03:31:09.600718 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:31:10.601481 master-0 kubenswrapper[7387]: I0308 03:31:10.601403 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:31:10.601481 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:31:10.601481 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:31:10.601481 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:31:10.602685 master-0 kubenswrapper[7387]: I0308 03:31:10.601504 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:31:11.599034 master-0 kubenswrapper[7387]: I0308 03:31:11.598974 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 08 03:31:11.599034 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:31:11.599034 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:31:11.599034 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:31:11.599376 master-0 kubenswrapper[7387]: I0308 03:31:11.599037 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:31:12.599711 master-0 kubenswrapper[7387]: I0308 03:31:12.599630 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:31:12.599711 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:31:12.599711 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:31:12.599711 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:31:12.599711 master-0 kubenswrapper[7387]: I0308 03:31:12.599709 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:31:13.599098 master-0 kubenswrapper[7387]: I0308 03:31:13.598999 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:31:13.599098 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:31:13.599098 master-0 kubenswrapper[7387]: [+]process-running ok 
Mar 08 03:31:13.599098 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:31:13.599098 master-0 kubenswrapper[7387]: I0308 03:31:13.599078 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:31:13.765922 master-0 kubenswrapper[7387]: I0308 03:31:13.765815 7387 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="f56d0a85-fd9c-4bfb-9912-27fbe89b0adb" Mar 08 03:31:13.765922 master-0 kubenswrapper[7387]: I0308 03:31:13.765879 7387 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="f56d0a85-fd9c-4bfb-9912-27fbe89b0adb" Mar 08 03:31:13.790124 master-0 kubenswrapper[7387]: I0308 03:31:13.790050 7387 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0" Mar 08 03:31:13.792503 master-0 kubenswrapper[7387]: I0308 03:31:13.792447 7387 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 08 03:31:13.798951 master-0 kubenswrapper[7387]: I0308 03:31:13.798850 7387 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 08 03:31:13.843553 master-0 kubenswrapper[7387]: I0308 03:31:13.843471 7387 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 08 03:31:14.379696 master-0 kubenswrapper[7387]: I0308 03:31:14.379618 7387 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="f56d0a85-fd9c-4bfb-9912-27fbe89b0adb" Mar 08 03:31:14.379696 master-0 kubenswrapper[7387]: I0308 03:31:14.379669 7387 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="f56d0a85-fd9c-4bfb-9912-27fbe89b0adb" Mar 08 03:31:14.599433 master-0 kubenswrapper[7387]: I0308 03:31:14.599369 7387 patch_prober.go:28] interesting 
pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:31:14.599433 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:31:14.599433 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:31:14.599433 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:31:14.599742 master-0 kubenswrapper[7387]: I0308 03:31:14.599457 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:31:15.599453 master-0 kubenswrapper[7387]: I0308 03:31:15.599395 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:31:15.599453 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:31:15.599453 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:31:15.599453 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:31:15.600018 master-0 kubenswrapper[7387]: I0308 03:31:15.599466 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:31:16.599611 master-0 kubenswrapper[7387]: I0308 03:31:16.599506 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 
03:31:16.599611 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:31:16.599611 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:31:16.599611 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:31:16.600652 master-0 kubenswrapper[7387]: I0308 03:31:16.599617 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:31:17.598790 master-0 kubenswrapper[7387]: I0308 03:31:17.598701 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:31:17.598790 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:31:17.598790 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:31:17.598790 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:31:17.599285 master-0 kubenswrapper[7387]: I0308 03:31:17.598793 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:31:17.759882 master-0 kubenswrapper[7387]: I0308 03:31:17.759772 7387 scope.go:117] "RemoveContainer" containerID="b291f8e827490042eb3fb88b716e290e8802aa029f1abc52b08ef049c1f2620a" Mar 08 03:31:17.760782 master-0 kubenswrapper[7387]: E0308 03:31:17.760162 7387 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=snapshot-controller 
pod=csi-snapshot-controller-7577d6f48-kfmd9_openshift-cluster-storage-operator(9fb588a9-6240-4513-8e4b-248eb43d3f06)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" podUID="9fb588a9-6240-4513-8e4b-248eb43d3f06" Mar 08 03:31:18.599762 master-0 kubenswrapper[7387]: I0308 03:31:18.599673 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:31:18.599762 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:31:18.599762 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:31:18.599762 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:31:18.600096 master-0 kubenswrapper[7387]: I0308 03:31:18.599774 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:31:19.599999 master-0 kubenswrapper[7387]: I0308 03:31:19.599884 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:31:19.599999 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:31:19.599999 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:31:19.599999 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:31:19.601032 master-0 kubenswrapper[7387]: I0308 03:31:19.600025 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 08 03:31:20.599571 master-0 kubenswrapper[7387]: I0308 03:31:20.599475 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:31:20.599571 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:31:20.599571 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:31:20.599571 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:31:20.599571 master-0 kubenswrapper[7387]: I0308 03:31:20.599565 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:31:21.600947 master-0 kubenswrapper[7387]: I0308 03:31:21.600815 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:31:21.600947 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:31:21.600947 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:31:21.600947 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:31:21.602352 master-0 kubenswrapper[7387]: I0308 03:31:21.600964 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:31:22.599735 master-0 kubenswrapper[7387]: I0308 03:31:22.599642 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:31:22.599735 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:31:22.599735 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:31:22.599735 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:31:22.600215 master-0 kubenswrapper[7387]: I0308 03:31:22.599760 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:31:23.599515 master-0 kubenswrapper[7387]: I0308 03:31:23.599434 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:31:23.599515 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:31:23.599515 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:31:23.599515 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:31:23.600733 master-0 kubenswrapper[7387]: I0308 03:31:23.599536 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:31:23.825900 master-0 kubenswrapper[7387]: I0308 03:31:23.825773 7387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=10.825737297 podStartE2EDuration="10.825737297s" podCreationTimestamp="2026-03-08 03:31:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:31:23.813305296 +0000 UTC m=+1220.207781017" watchObservedRunningTime="2026-03-08 03:31:23.825737297 +0000 UTC m=+1220.220213018" Mar 08 03:31:24.599246 master-0 kubenswrapper[7387]: I0308 03:31:24.599147 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:31:24.599246 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:31:24.599246 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:31:24.599246 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:31:24.599551 master-0 kubenswrapper[7387]: I0308 03:31:24.599278 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:31:24.674608 master-0 kubenswrapper[7387]: I0308 03:31:24.674526 7387 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 08 03:31:24.674986 master-0 kubenswrapper[7387]: E0308 03:31:24.674956 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="627f0501-8b6a-4bc7-b610-355a0661f385" containerName="installer" Mar 08 03:31:24.675050 master-0 kubenswrapper[7387]: I0308 03:31:24.674984 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="627f0501-8b6a-4bc7-b610-355a0661f385" containerName="installer" Mar 08 03:31:24.675300 master-0 kubenswrapper[7387]: I0308 03:31:24.675268 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="627f0501-8b6a-4bc7-b610-355a0661f385" containerName="installer" Mar 08 03:31:24.675924 master-0 kubenswrapper[7387]: I0308 03:31:24.675876 
7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:31:24.676069 master-0 kubenswrapper[7387]: I0308 03:31:24.675997 7387 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 08 03:31:24.676511 master-0 kubenswrapper[7387]: I0308 03:31:24.676447 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" containerID="cri-o://bf4fabb9c08963210bf1f0d197a394d399879939569bdcc3789dd4b487cec36f" gracePeriod=15 Mar 08 03:31:24.676763 master-0 kubenswrapper[7387]: I0308 03:31:24.676632 7387 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://296632ab9853e033010913fee076e7b35b875fbd7f05c08351eaf2c0ae69f50d" gracePeriod=15 Mar 08 03:31:24.679352 master-0 kubenswrapper[7387]: I0308 03:31:24.679263 7387 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 08 03:31:24.680188 master-0 kubenswrapper[7387]: E0308 03:31:24.680143 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 08 03:31:24.680256 master-0 kubenswrapper[7387]: I0308 03:31:24.680189 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 08 03:31:24.682110 master-0 kubenswrapper[7387]: E0308 03:31:24.680290 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 08 03:31:24.682199 master-0 kubenswrapper[7387]: I0308 
03:31:24.682171 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 08 03:31:24.682468 master-0 kubenswrapper[7387]: E0308 03:31:24.682423 7387 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 08 03:31:24.682549 master-0 kubenswrapper[7387]: I0308 03:31:24.682510 7387 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 08 03:31:24.683371 master-0 kubenswrapper[7387]: I0308 03:31:24.683285 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 08 03:31:24.683439 master-0 kubenswrapper[7387]: I0308 03:31:24.683383 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 08 03:31:24.683561 master-0 kubenswrapper[7387]: I0308 03:31:24.683521 7387 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 08 03:31:24.688551 master-0 kubenswrapper[7387]: I0308 03:31:24.688465 7387 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:31:24.784212 master-0 kubenswrapper[7387]: E0308 03:31:24.779742 7387 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:31:24.837224 master-0 kubenswrapper[7387]: I0308 03:31:24.837182 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:31:24.837504 master-0 kubenswrapper[7387]: I0308 03:31:24.837478 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:31:24.837632 master-0 kubenswrapper[7387]: I0308 03:31:24.837614 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:31:24.837758 master-0 kubenswrapper[7387]: I0308 03:31:24.837742 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:31:24.837938 master-0 kubenswrapper[7387]: I0308 03:31:24.837858 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:31:24.838080 master-0 kubenswrapper[7387]: I0308 03:31:24.838058 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:31:24.838193 master-0 kubenswrapper[7387]: I0308 03:31:24.838176 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:31:24.838306 master-0 kubenswrapper[7387]: I0308 03:31:24.838289 7387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:31:24.940704 master-0 kubenswrapper[7387]: I0308 
03:31:24.940630 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:31:24.940922 master-0 kubenswrapper[7387]: I0308 03:31:24.940735 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:31:24.940922 master-0 kubenswrapper[7387]: I0308 03:31:24.940763 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:31:24.940922 master-0 kubenswrapper[7387]: I0308 03:31:24.940783 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:31:24.940922 master-0 kubenswrapper[7387]: I0308 03:31:24.940862 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:31:24.940922 master-0 kubenswrapper[7387]: I0308 03:31:24.940892 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:31:24.941128 master-0 kubenswrapper[7387]: I0308 03:31:24.940999 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:31:24.941128 master-0 kubenswrapper[7387]: I0308 03:31:24.940998 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:31:24.941128 master-0 kubenswrapper[7387]: I0308 03:31:24.941084 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:31:24.941242 master-0 kubenswrapper[7387]: I0308 03:31:24.941145 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: 
\"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:31:24.941242 master-0 kubenswrapper[7387]: I0308 03:31:24.941197 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:31:24.941323 master-0 kubenswrapper[7387]: I0308 03:31:24.941247 7387 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:31:24.941416 master-0 kubenswrapper[7387]: I0308 03:31:24.941343 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:31:24.941416 master-0 kubenswrapper[7387]: I0308 03:31:24.941403 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:31:24.941501 master-0 kubenswrapper[7387]: I0308 03:31:24.941369 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod 
\"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:31:24.941501 master-0 kubenswrapper[7387]: I0308 03:31:24.941447 7387 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:31:25.082029 master-0 kubenswrapper[7387]: I0308 03:31:25.081814 7387 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:31:25.109196 master-0 kubenswrapper[7387]: E0308 03:31:25.108980 7387 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-master-0.189ac028dfad67a5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:077dd10388b9e3e48a07382126e86621,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:31:25.107619749 +0000 UTC m=+1221.502095440,LastTimestamp:2026-03-08 03:31:25.107619749 +0000 UTC m=+1221.502095440,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:31:25.285301 master-0 kubenswrapper[7387]: 
I0308 03:31:25.285260 7387 patch_prober.go:28] interesting pod/bootstrap-kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.32.10:6443/readyz\": dial tcp 192.168.32.10:6443: connect: connection refused" start-of-body= Mar 08 03:31:25.285484 master-0 kubenswrapper[7387]: I0308 03:31:25.285308 7387 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.32.10:6443/readyz\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:31:25.484023 master-0 kubenswrapper[7387]: I0308 03:31:25.483963 7387 generic.go:334] "Generic (PLEG): container finished" podID="e6716923-7f46-438f-9cc4-c0f071ca5b1a" containerID="c63ef8e2456c825e658d5f608a85868873e2b693945cba943036d87c971f2472" exitCode=0 Mar 08 03:31:25.484636 master-0 kubenswrapper[7387]: I0308 03:31:25.484044 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" event={"ID":"e6716923-7f46-438f-9cc4-c0f071ca5b1a","Type":"ContainerDied","Data":"c63ef8e2456c825e658d5f608a85868873e2b693945cba943036d87c971f2472"} Mar 08 03:31:25.486350 master-0 kubenswrapper[7387]: I0308 03:31:25.486285 7387 status_manager.go:851] "Failed to get status for pod" podUID="e6716923-7f46-438f-9cc4-c0f071ca5b1a" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:31:25.487362 master-0 kubenswrapper[7387]: I0308 03:31:25.487291 7387 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" 
containerID="b304608a015d2315aefa117ee7abc73ca3562cc6ec205cc71c44d00892a24867" exitCode=0 Mar 08 03:31:25.487469 master-0 kubenswrapper[7387]: I0308 03:31:25.487424 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerDied","Data":"b304608a015d2315aefa117ee7abc73ca3562cc6ec205cc71c44d00892a24867"} Mar 08 03:31:25.487552 master-0 kubenswrapper[7387]: I0308 03:31:25.487477 7387 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"708fff129dc113f73aa37f475b4ae4bc5c5913ac215686fbff11aa81a810bb5e"} Mar 08 03:31:25.489030 master-0 kubenswrapper[7387]: E0308 03:31:25.488965 7387 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:31:25.489189 master-0 kubenswrapper[7387]: I0308 03:31:25.489038 7387 status_manager.go:851] "Failed to get status for pod" podUID="e6716923-7f46-438f-9cc4-c0f071ca5b1a" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:31:25.492748 master-0 kubenswrapper[7387]: I0308 03:31:25.492674 7387 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="296632ab9853e033010913fee076e7b35b875fbd7f05c08351eaf2c0ae69f50d" exitCode=0 Mar 08 03:31:25.599736 master-0 kubenswrapper[7387]: I0308 03:31:25.599660 7387 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-tkxj9 container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 03:31:25.599736 master-0 kubenswrapper[7387]: [-]has-synced failed: reason withheld Mar 08 03:31:25.599736 master-0 kubenswrapper[7387]: [+]process-running ok Mar 08 03:31:25.599736 master-0 kubenswrapper[7387]: healthz check failed Mar 08 03:31:25.601382 master-0 kubenswrapper[7387]: I0308 03:31:25.601321 7387 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" podUID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 03:31:26.000648 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 08 03:31:26.019074 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 08 03:31:26.019359 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 08 03:31:26.020371 master-0 systemd[1]: kubelet.service: Consumed 2min 46.161s CPU time. Mar 08 03:31:26.054796 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 08 03:31:26.175294 master-0 kubenswrapper[33141]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 08 03:31:26.175294 master-0 kubenswrapper[33141]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 08 03:31:26.175294 master-0 kubenswrapper[33141]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 08 03:31:26.175294 master-0 kubenswrapper[33141]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 08 03:31:26.175294 master-0 kubenswrapper[33141]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 08 03:31:26.175294 master-0 kubenswrapper[33141]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 08 03:31:26.176078 master-0 kubenswrapper[33141]: I0308 03:31:26.175401 33141 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 08 03:31:26.178380 master-0 kubenswrapper[33141]: W0308 03:31:26.178339 33141 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 08 03:31:26.178380 master-0 kubenswrapper[33141]: W0308 03:31:26.178366 33141 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 08 03:31:26.178380 master-0 kubenswrapper[33141]: W0308 03:31:26.178374 33141 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 08 03:31:26.178380 master-0 kubenswrapper[33141]: W0308 03:31:26.178380 33141 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 08 03:31:26.178380 master-0 kubenswrapper[33141]: W0308 03:31:26.178386 33141 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 08 03:31:26.179135 master-0 kubenswrapper[33141]: W0308 03:31:26.178392 33141 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 08 03:31:26.179135 master-0 kubenswrapper[33141]: W0308 03:31:26.178398 33141 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 08 03:31:26.179135 master-0 kubenswrapper[33141]: W0308 03:31:26.178404 33141 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 08 03:31:26.179135 master-0 kubenswrapper[33141]: W0308 03:31:26.178409 33141 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 08 03:31:26.179135 master-0 kubenswrapper[33141]: W0308 03:31:26.178415 33141 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 08 03:31:26.179135 master-0 kubenswrapper[33141]: W0308 03:31:26.178421 33141 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 08 03:31:26.179135 master-0 kubenswrapper[33141]: W0308 03:31:26.178426 33141 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 08 03:31:26.179135 master-0 kubenswrapper[33141]: W0308 03:31:26.178431 33141 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 08 03:31:26.179135 master-0 kubenswrapper[33141]: W0308 03:31:26.178437 33141 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS 
Mar 08 03:31:26.179135 master-0 kubenswrapper[33141]: W0308 03:31:26.178441 33141 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 08 03:31:26.179135 master-0 kubenswrapper[33141]: W0308 03:31:26.178447 33141 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 08 03:31:26.179135 master-0 kubenswrapper[33141]: W0308 03:31:26.178452 33141 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 08 03:31:26.179135 master-0 kubenswrapper[33141]: W0308 03:31:26.178464 33141 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 08 03:31:26.179135 master-0 kubenswrapper[33141]: W0308 03:31:26.178470 33141 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 08 03:31:26.179135 master-0 kubenswrapper[33141]: W0308 03:31:26.178476 33141 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 08 03:31:26.179135 master-0 kubenswrapper[33141]: W0308 03:31:26.178482 33141 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 08 03:31:26.179135 master-0 kubenswrapper[33141]: W0308 03:31:26.178489 33141 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 08 03:31:26.179135 master-0 kubenswrapper[33141]: W0308 03:31:26.178496 33141 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 08 03:31:26.179135 master-0 kubenswrapper[33141]: W0308 03:31:26.178503 33141 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 08 03:31:26.179135 master-0 kubenswrapper[33141]: W0308 03:31:26.178509 33141 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 08 03:31:26.180006 master-0 kubenswrapper[33141]: W0308 03:31:26.178515 33141 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 08 03:31:26.180006 master-0 kubenswrapper[33141]: W0308 03:31:26.178523 33141 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 08 03:31:26.180006 master-0 kubenswrapper[33141]: W0308 03:31:26.178531 33141 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 08 03:31:26.180006 master-0 kubenswrapper[33141]: W0308 03:31:26.178537 33141 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 08 03:31:26.180006 master-0 kubenswrapper[33141]: W0308 03:31:26.178544 33141 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 08 03:31:26.180006 master-0 kubenswrapper[33141]: W0308 03:31:26.178551 33141 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 08 03:31:26.180006 master-0 kubenswrapper[33141]: W0308 03:31:26.178556 33141 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 08 03:31:26.180006 master-0 kubenswrapper[33141]: W0308 03:31:26.178561 33141 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 08 03:31:26.180006 master-0 kubenswrapper[33141]: W0308 03:31:26.178566 33141 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 08 03:31:26.180006 master-0 kubenswrapper[33141]: W0308 03:31:26.178572 33141 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 08 03:31:26.180006 master-0 kubenswrapper[33141]: W0308 03:31:26.178577 33141 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 08 03:31:26.180006 master-0 kubenswrapper[33141]: W0308 03:31:26.178581 33141 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 08 03:31:26.180006 master-0 kubenswrapper[33141]: W0308 03:31:26.178586 33141 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 08 03:31:26.180006 master-0 kubenswrapper[33141]: W0308 03:31:26.178591 33141 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 08 03:31:26.180006 master-0 kubenswrapper[33141]: W0308 03:31:26.178597 33141 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 08 03:31:26.180006 
master-0 kubenswrapper[33141]: W0308 03:31:26.178603 33141 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 08 03:31:26.180006 master-0 kubenswrapper[33141]: W0308 03:31:26.178608 33141 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 08 03:31:26.180006 master-0 kubenswrapper[33141]: W0308 03:31:26.178612 33141 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 08 03:31:26.180006 master-0 kubenswrapper[33141]: W0308 03:31:26.178618 33141 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 08 03:31:26.180006 master-0 kubenswrapper[33141]: W0308 03:31:26.178623 33141 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 08 03:31:26.181353 master-0 kubenswrapper[33141]: W0308 03:31:26.178628 33141 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 08 03:31:26.181353 master-0 kubenswrapper[33141]: W0308 03:31:26.178632 33141 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 08 03:31:26.181353 master-0 kubenswrapper[33141]: W0308 03:31:26.178638 33141 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 08 03:31:26.181353 master-0 kubenswrapper[33141]: W0308 03:31:26.178643 33141 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 08 03:31:26.181353 master-0 kubenswrapper[33141]: W0308 03:31:26.178650 33141 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 08 03:31:26.181353 master-0 kubenswrapper[33141]: W0308 03:31:26.178656 33141 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 08 03:31:26.181353 master-0 kubenswrapper[33141]: W0308 03:31:26.178662 33141 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 08 03:31:26.181353 master-0 kubenswrapper[33141]: W0308 03:31:26.178667 33141 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 08 03:31:26.181353 master-0 kubenswrapper[33141]: W0308 03:31:26.178677 33141 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 08 03:31:26.181353 master-0 kubenswrapper[33141]: W0308 03:31:26.178684 33141 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 08 03:31:26.181353 master-0 kubenswrapper[33141]: W0308 03:31:26.178690 33141 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 08 03:31:26.181353 master-0 kubenswrapper[33141]: W0308 03:31:26.178695 33141 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 08 03:31:26.181353 master-0 kubenswrapper[33141]: W0308 03:31:26.178702 33141 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 08 03:31:26.181353 master-0 kubenswrapper[33141]: W0308 03:31:26.178707 33141 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 08 03:31:26.181353 master-0 kubenswrapper[33141]: W0308 03:31:26.178713 33141 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 08 03:31:26.181353 master-0 kubenswrapper[33141]: W0308 03:31:26.178718 33141 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 08 03:31:26.181353 master-0 kubenswrapper[33141]: W0308 03:31:26.178723 33141 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 08 03:31:26.181353 master-0 kubenswrapper[33141]: W0308 03:31:26.178729 33141 feature_gate.go:330] unrecognized 
feature gate: InsightsOnDemandDataGather Mar 08 03:31:26.181353 master-0 kubenswrapper[33141]: W0308 03:31:26.178734 33141 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 08 03:31:26.182058 master-0 kubenswrapper[33141]: W0308 03:31:26.178740 33141 feature_gate.go:330] unrecognized feature gate: Example Mar 08 03:31:26.182058 master-0 kubenswrapper[33141]: W0308 03:31:26.178746 33141 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 08 03:31:26.182058 master-0 kubenswrapper[33141]: W0308 03:31:26.178751 33141 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 08 03:31:26.182058 master-0 kubenswrapper[33141]: W0308 03:31:26.178756 33141 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 08 03:31:26.182058 master-0 kubenswrapper[33141]: W0308 03:31:26.178761 33141 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 08 03:31:26.182058 master-0 kubenswrapper[33141]: W0308 03:31:26.178766 33141 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 08 03:31:26.182058 master-0 kubenswrapper[33141]: W0308 03:31:26.178771 33141 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 08 03:31:26.182058 master-0 kubenswrapper[33141]: W0308 03:31:26.178776 33141 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 08 03:31:26.182058 master-0 kubenswrapper[33141]: I0308 03:31:26.178938 33141 flags.go:64] FLAG: --address="0.0.0.0" Mar 08 03:31:26.182058 master-0 kubenswrapper[33141]: I0308 03:31:26.178975 33141 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 08 03:31:26.182058 master-0 kubenswrapper[33141]: I0308 03:31:26.178987 33141 flags.go:64] FLAG: --anonymous-auth="true" Mar 08 03:31:26.182058 master-0 kubenswrapper[33141]: I0308 03:31:26.178996 33141 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 08 03:31:26.182058 master-0 kubenswrapper[33141]: I0308 03:31:26.179005 33141 flags.go:64] FLAG: 
--authentication-token-webhook="false" Mar 08 03:31:26.182058 master-0 kubenswrapper[33141]: I0308 03:31:26.179013 33141 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 08 03:31:26.182058 master-0 kubenswrapper[33141]: I0308 03:31:26.179023 33141 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 08 03:31:26.182058 master-0 kubenswrapper[33141]: I0308 03:31:26.179032 33141 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 08 03:31:26.182058 master-0 kubenswrapper[33141]: I0308 03:31:26.179040 33141 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 08 03:31:26.182058 master-0 kubenswrapper[33141]: I0308 03:31:26.179047 33141 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 08 03:31:26.182058 master-0 kubenswrapper[33141]: I0308 03:31:26.179055 33141 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 08 03:31:26.182058 master-0 kubenswrapper[33141]: I0308 03:31:26.179065 33141 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 08 03:31:26.182058 master-0 kubenswrapper[33141]: I0308 03:31:26.179073 33141 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 08 03:31:26.182058 master-0 kubenswrapper[33141]: I0308 03:31:26.179081 33141 flags.go:64] FLAG: --cgroup-root="" Mar 08 03:31:26.182058 master-0 kubenswrapper[33141]: I0308 03:31:26.179088 33141 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 08 03:31:26.182924 master-0 kubenswrapper[33141]: I0308 03:31:26.179095 33141 flags.go:64] FLAG: --client-ca-file="" Mar 08 03:31:26.182924 master-0 kubenswrapper[33141]: I0308 03:31:26.179102 33141 flags.go:64] FLAG: --cloud-config="" Mar 08 03:31:26.182924 master-0 kubenswrapper[33141]: I0308 03:31:26.179111 33141 flags.go:64] FLAG: --cloud-provider="" Mar 08 03:31:26.182924 master-0 kubenswrapper[33141]: I0308 03:31:26.179118 33141 flags.go:64] FLAG: --cluster-dns="[]" Mar 08 03:31:26.182924 master-0 kubenswrapper[33141]: I0308 03:31:26.179127 
33141 flags.go:64] FLAG: --cluster-domain="" Mar 08 03:31:26.182924 master-0 kubenswrapper[33141]: I0308 03:31:26.179133 33141 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 08 03:31:26.182924 master-0 kubenswrapper[33141]: I0308 03:31:26.179142 33141 flags.go:64] FLAG: --config-dir="" Mar 08 03:31:26.182924 master-0 kubenswrapper[33141]: I0308 03:31:26.179150 33141 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 08 03:31:26.182924 master-0 kubenswrapper[33141]: I0308 03:31:26.179158 33141 flags.go:64] FLAG: --container-log-max-files="5" Mar 08 03:31:26.182924 master-0 kubenswrapper[33141]: I0308 03:31:26.179167 33141 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 08 03:31:26.182924 master-0 kubenswrapper[33141]: I0308 03:31:26.179174 33141 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 08 03:31:26.182924 master-0 kubenswrapper[33141]: I0308 03:31:26.179182 33141 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 08 03:31:26.182924 master-0 kubenswrapper[33141]: I0308 03:31:26.179213 33141 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 08 03:31:26.182924 master-0 kubenswrapper[33141]: I0308 03:31:26.179220 33141 flags.go:64] FLAG: --contention-profiling="false" Mar 08 03:31:26.182924 master-0 kubenswrapper[33141]: I0308 03:31:26.179228 33141 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 08 03:31:26.182924 master-0 kubenswrapper[33141]: I0308 03:31:26.179276 33141 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 08 03:31:26.182924 master-0 kubenswrapper[33141]: I0308 03:31:26.179287 33141 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 08 03:31:26.182924 master-0 kubenswrapper[33141]: I0308 03:31:26.179294 33141 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 08 03:31:26.182924 master-0 kubenswrapper[33141]: I0308 03:31:26.179305 33141 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 08 03:31:26.182924 master-0 
kubenswrapper[33141]: I0308 03:31:26.179313 33141 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 08 03:31:26.182924 master-0 kubenswrapper[33141]: I0308 03:31:26.179320 33141 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 08 03:31:26.182924 master-0 kubenswrapper[33141]: I0308 03:31:26.179327 33141 flags.go:64] FLAG: --enable-load-reader="false" Mar 08 03:31:26.182924 master-0 kubenswrapper[33141]: I0308 03:31:26.179335 33141 flags.go:64] FLAG: --enable-server="true" Mar 08 03:31:26.182924 master-0 kubenswrapper[33141]: I0308 03:31:26.179343 33141 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 08 03:31:26.182924 master-0 kubenswrapper[33141]: I0308 03:31:26.179355 33141 flags.go:64] FLAG: --event-burst="100" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: I0308 03:31:26.179363 33141 flags.go:64] FLAG: --event-qps="50" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: I0308 03:31:26.179370 33141 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: I0308 03:31:26.179377 33141 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: I0308 03:31:26.179385 33141 flags.go:64] FLAG: --eviction-hard="" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: I0308 03:31:26.179394 33141 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: I0308 03:31:26.179402 33141 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: I0308 03:31:26.179409 33141 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: I0308 03:31:26.179417 33141 flags.go:64] FLAG: --eviction-soft="" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: I0308 03:31:26.179425 33141 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: 
I0308 03:31:26.179432 33141 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: I0308 03:31:26.179439 33141 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: I0308 03:31:26.179445 33141 flags.go:64] FLAG: --experimental-mounter-path="" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: I0308 03:31:26.179450 33141 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: I0308 03:31:26.179457 33141 flags.go:64] FLAG: --fail-swap-on="true" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: I0308 03:31:26.179462 33141 flags.go:64] FLAG: --feature-gates="" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: I0308 03:31:26.179468 33141 flags.go:64] FLAG: --file-check-frequency="20s" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: I0308 03:31:26.179474 33141 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: I0308 03:31:26.179479 33141 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: I0308 03:31:26.179485 33141 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: I0308 03:31:26.179490 33141 flags.go:64] FLAG: --healthz-port="10248" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: I0308 03:31:26.179496 33141 flags.go:64] FLAG: --help="false" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: I0308 03:31:26.179501 33141 flags.go:64] FLAG: --hostname-override="" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: I0308 03:31:26.179506 33141 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: I0308 03:31:26.179512 33141 flags.go:64] FLAG: --http-check-frequency="20s" Mar 08 03:31:26.183967 master-0 kubenswrapper[33141]: I0308 03:31:26.179517 33141 flags.go:64] 
FLAG: --image-credential-provider-bin-dir=""
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179522 33141 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179528 33141 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179533 33141 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179538 33141 flags.go:64] FLAG: --image-service-endpoint=""
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179543 33141 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179548 33141 flags.go:64] FLAG: --kube-api-burst="100"
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179554 33141 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179560 33141 flags.go:64] FLAG: --kube-api-qps="50"
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179565 33141 flags.go:64] FLAG: --kube-reserved=""
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179570 33141 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179575 33141 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179580 33141 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179585 33141 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179591 33141 flags.go:64] FLAG: --lock-file=""
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179596 33141 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179602 33141 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179607 33141 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179615 33141 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179620 33141 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179625 33141 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179630 33141 flags.go:64] FLAG: --logging-format="text"
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179635 33141 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179641 33141 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179647 33141 flags.go:64] FLAG: --manifest-url=""
Mar 08 03:31:26.184875 master-0 kubenswrapper[33141]: I0308 03:31:26.179652 33141 flags.go:64] FLAG: --manifest-url-header=""
Mar 08 03:31:26.185982 master-0 kubenswrapper[33141]: I0308 03:31:26.179660 33141 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 08 03:31:26.185982 master-0 kubenswrapper[33141]: I0308 03:31:26.179665 33141 flags.go:64] FLAG: --max-open-files="1000000"
Mar 08 03:31:26.185982 master-0 kubenswrapper[33141]: I0308 03:31:26.179677 33141 flags.go:64] FLAG: --max-pods="110"
Mar 08 03:31:26.185982 master-0 kubenswrapper[33141]: I0308 03:31:26.179682 33141 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 08 03:31:26.185982 master-0 kubenswrapper[33141]: I0308 03:31:26.179687 33141 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 08 03:31:26.185982 master-0 kubenswrapper[33141]: I0308 03:31:26.179691 33141 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 08 03:31:26.185982 master-0 kubenswrapper[33141]: I0308 03:31:26.179697 33141 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 08 03:31:26.185982 master-0 kubenswrapper[33141]: I0308 03:31:26.179703 33141 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 08 03:31:26.185982 master-0 kubenswrapper[33141]: I0308 03:31:26.179708 33141 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 08 03:31:26.185982 master-0 kubenswrapper[33141]: I0308 03:31:26.179713 33141 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 08 03:31:26.185982 master-0 kubenswrapper[33141]: I0308 03:31:26.179725 33141 flags.go:64] FLAG: --node-status-max-images="50"
Mar 08 03:31:26.185982 master-0 kubenswrapper[33141]: I0308 03:31:26.179730 33141 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 08 03:31:26.185982 master-0 kubenswrapper[33141]: I0308 03:31:26.179735 33141 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 08 03:31:26.185982 master-0 kubenswrapper[33141]: I0308 03:31:26.179741 33141 flags.go:64] FLAG: --pod-cidr=""
Mar 08 03:31:26.185982 master-0 kubenswrapper[33141]: I0308 03:31:26.179746 33141 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3"
Mar 08 03:31:26.185982 master-0 kubenswrapper[33141]: I0308 03:31:26.179755 33141 flags.go:64] FLAG: --pod-manifest-path=""
Mar 08 03:31:26.185982 master-0 kubenswrapper[33141]: I0308 03:31:26.179760 33141 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 08 03:31:26.185982 master-0 kubenswrapper[33141]: I0308 03:31:26.179766 33141 flags.go:64] FLAG: --pods-per-core="0"
Mar 08 03:31:26.185982 master-0 kubenswrapper[33141]: I0308 03:31:26.179772
33141 flags.go:64] FLAG: --port="10250"
Mar 08 03:31:26.185982 master-0 kubenswrapper[33141]: I0308 03:31:26.179777 33141 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 08 03:31:26.185982 master-0 kubenswrapper[33141]: I0308 03:31:26.179784 33141 flags.go:64] FLAG: --provider-id=""
Mar 08 03:31:26.185982 master-0 kubenswrapper[33141]: I0308 03:31:26.179788 33141 flags.go:64] FLAG: --qos-reserved=""
Mar 08 03:31:26.185982 master-0 kubenswrapper[33141]: I0308 03:31:26.179794 33141 flags.go:64] FLAG: --read-only-port="10255"
Mar 08 03:31:26.185982 master-0 kubenswrapper[33141]: I0308 03:31:26.179799 33141 flags.go:64] FLAG: --register-node="true"
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179804 33141 flags.go:64] FLAG: --register-schedulable="true"
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179809 33141 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179818 33141 flags.go:64] FLAG: --registry-burst="10"
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179824 33141 flags.go:64] FLAG: --registry-qps="5"
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179829 33141 flags.go:64] FLAG: --reserved-cpus=""
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179834 33141 flags.go:64] FLAG: --reserved-memory=""
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179841 33141 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179845 33141 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179850 33141 flags.go:64] FLAG: --rotate-certificates="false"
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179854 33141 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179861 33141 flags.go:64] FLAG: --runonce="false"
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179866 33141 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179871 33141 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179875 33141 flags.go:64] FLAG: --seccomp-default="false"
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179880 33141 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179885 33141 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179889 33141 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179893 33141 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179898 33141 flags.go:64] FLAG: --storage-driver-password="root"
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179924 33141 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179930 33141 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179935 33141 flags.go:64] FLAG: --storage-driver-user="root"
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179940 33141 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179946 33141 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 08 03:31:26.187002 master-0 kubenswrapper[33141]: I0308 03:31:26.179951 33141 flags.go:64] FLAG: --system-cgroups=""
Mar 08 03:31:26.187963 master-0 kubenswrapper[33141]: I0308 03:31:26.179956 33141 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 08 03:31:26.187963 master-0 kubenswrapper[33141]: I0308 03:31:26.179963 33141 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 08 03:31:26.187963 master-0 kubenswrapper[33141]: I0308 03:31:26.179967 33141 flags.go:64] FLAG: --tls-cert-file=""
Mar 08 03:31:26.187963 master-0 kubenswrapper[33141]: I0308 03:31:26.179971 33141 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 08 03:31:26.187963 master-0 kubenswrapper[33141]: I0308 03:31:26.179977 33141 flags.go:64] FLAG: --tls-min-version=""
Mar 08 03:31:26.187963 master-0 kubenswrapper[33141]: I0308 03:31:26.179981 33141 flags.go:64] FLAG: --tls-private-key-file=""
Mar 08 03:31:26.187963 master-0 kubenswrapper[33141]: I0308 03:31:26.179985 33141 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 08 03:31:26.187963 master-0 kubenswrapper[33141]: I0308 03:31:26.179989 33141 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 08 03:31:26.187963 master-0 kubenswrapper[33141]: I0308 03:31:26.179994 33141 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 08 03:31:26.187963 master-0 kubenswrapper[33141]: I0308 03:31:26.179998 33141 flags.go:64] FLAG: --v="2"
Mar 08 03:31:26.187963 master-0 kubenswrapper[33141]: I0308 03:31:26.180004 33141 flags.go:64] FLAG: --version="false"
Mar 08 03:31:26.187963 master-0 kubenswrapper[33141]: I0308 03:31:26.180010 33141 flags.go:64] FLAG: --vmodule=""
Mar 08 03:31:26.187963 master-0 kubenswrapper[33141]: I0308 03:31:26.180016 33141 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 08 03:31:26.187963 master-0 kubenswrapper[33141]: I0308 03:31:26.180020 33141 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 08 03:31:26.187963 master-0 kubenswrapper[33141]: W0308 03:31:26.180126 33141 feature_gate.go:353] Setting GA feature gate
DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 08 03:31:26.187963 master-0 kubenswrapper[33141]: W0308 03:31:26.180132 33141 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 08 03:31:26.187963 master-0 kubenswrapper[33141]: W0308 03:31:26.180136 33141 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 08 03:31:26.187963 master-0 kubenswrapper[33141]: W0308 03:31:26.180142 33141 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 08 03:31:26.187963 master-0 kubenswrapper[33141]: W0308 03:31:26.180148 33141 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 08 03:31:26.187963 master-0 kubenswrapper[33141]: W0308 03:31:26.180153 33141 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 08 03:31:26.187963 master-0 kubenswrapper[33141]: W0308 03:31:26.180157 33141 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 08 03:31:26.187963 master-0 kubenswrapper[33141]: W0308 03:31:26.180161 33141 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 08 03:31:26.188800 master-0 kubenswrapper[33141]: W0308 03:31:26.180165 33141 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 08 03:31:26.188800 master-0 kubenswrapper[33141]: W0308 03:31:26.180169 33141 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 08 03:31:26.188800 master-0 kubenswrapper[33141]: W0308 03:31:26.180173 33141 feature_gate.go:330] unrecognized feature gate: Example
Mar 08 03:31:26.188800 master-0 kubenswrapper[33141]: W0308 03:31:26.180176 33141 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 08 03:31:26.188800 master-0 kubenswrapper[33141]: W0308 03:31:26.180180 33141 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 08 03:31:26.188800 master-0 kubenswrapper[33141]: W0308 03:31:26.180183 33141 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 08 03:31:26.188800 master-0 kubenswrapper[33141]: W0308 03:31:26.180187 33141 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 08 03:31:26.188800 master-0 kubenswrapper[33141]: W0308 03:31:26.180191 33141 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 08 03:31:26.188800 master-0 kubenswrapper[33141]: W0308 03:31:26.180194 33141 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 08 03:31:26.188800 master-0 kubenswrapper[33141]: W0308 03:31:26.180198 33141 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 08 03:31:26.188800 master-0 kubenswrapper[33141]: W0308 03:31:26.180201 33141 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 08 03:31:26.188800 master-0 kubenswrapper[33141]: W0308 03:31:26.180205 33141 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 08 03:31:26.188800 master-0 kubenswrapper[33141]: W0308 03:31:26.180209 33141 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 08 03:31:26.188800 master-0 kubenswrapper[33141]: W0308 03:31:26.180213 33141 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 08 03:31:26.188800 master-0 kubenswrapper[33141]: W0308 03:31:26.180216 33141 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 08 03:31:26.188800 master-0 kubenswrapper[33141]: W0308 03:31:26.180220 33141 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 08 03:31:26.188800 master-0 kubenswrapper[33141]: W0308 03:31:26.180223 33141 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 08 03:31:26.188800 master-0 kubenswrapper[33141]: W0308 03:31:26.180227 33141 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 08 03:31:26.188800 master-0 kubenswrapper[33141]: W0308 03:31:26.180231 33141 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 08 03:31:26.188800 master-0 kubenswrapper[33141]: W0308 03:31:26.180234 33141 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 08 03:31:26.188800 master-0 kubenswrapper[33141]: W0308 03:31:26.180238 33141 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 08 03:31:26.190337 master-0 kubenswrapper[33141]: W0308 03:31:26.180242 33141 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 08 03:31:26.190337 master-0 kubenswrapper[33141]: W0308 03:31:26.180245 33141 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 08 03:31:26.190337 master-0 kubenswrapper[33141]: W0308 03:31:26.180249 33141 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 08 03:31:26.190337 master-0 kubenswrapper[33141]: W0308 03:31:26.180253 33141 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 08 03:31:26.190337 master-0 kubenswrapper[33141]: W0308 03:31:26.180256 33141 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 08 03:31:26.190337 master-0 kubenswrapper[33141]: W0308 03:31:26.180264 33141 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 08 03:31:26.190337 master-0 kubenswrapper[33141]: W0308 03:31:26.180270 33141 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 08 03:31:26.190337 master-0 kubenswrapper[33141]: W0308 03:31:26.180274 33141 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 08 03:31:26.190337 master-0 kubenswrapper[33141]: W0308 03:31:26.180279 33141 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 08 03:31:26.190337 master-0 kubenswrapper[33141]: W0308 03:31:26.180283 33141 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 08 03:31:26.190337 master-0 kubenswrapper[33141]: W0308 03:31:26.180287 33141 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 08 03:31:26.190337 master-0 kubenswrapper[33141]: W0308 03:31:26.180290 33141 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 08 03:31:26.190337 master-0 kubenswrapper[33141]: W0308 03:31:26.180294 33141 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 08 03:31:26.190337 master-0 kubenswrapper[33141]: W0308 03:31:26.180298 33141 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 08 03:31:26.190337 master-0 kubenswrapper[33141]: W0308 03:31:26.180301 33141 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 08 03:31:26.190337 master-0 kubenswrapper[33141]: W0308 03:31:26.180306 33141 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 08 03:31:26.190337 master-0 kubenswrapper[33141]: W0308 03:31:26.180310 33141 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 08 03:31:26.190337 master-0 kubenswrapper[33141]: W0308 03:31:26.180355 33141 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 08 03:31:26.190337 master-0 kubenswrapper[33141]: W0308 03:31:26.180361 33141 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 08 03:31:26.191413 master-0 kubenswrapper[33141]: W0308 03:31:26.180365 33141 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 08 03:31:26.191413 master-0 kubenswrapper[33141]: W0308 03:31:26.180369 33141 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 08 03:31:26.191413 master-0 kubenswrapper[33141]: W0308 03:31:26.180373 33141 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 08 03:31:26.191413 master-0 kubenswrapper[33141]: W0308 03:31:26.180376 33141 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 08 03:31:26.191413 master-0 kubenswrapper[33141]: W0308 03:31:26.180380 33141 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 08 03:31:26.191413 master-0 kubenswrapper[33141]: W0308 03:31:26.180384 33141 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 08 03:31:26.191413 master-0 kubenswrapper[33141]: W0308 03:31:26.180387 33141 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 08 03:31:26.191413 master-0 kubenswrapper[33141]: W0308 03:31:26.180391 33141 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 08 03:31:26.191413 master-0 kubenswrapper[33141]: W0308 03:31:26.180394 33141 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 08 03:31:26.191413 master-0 kubenswrapper[33141]: W0308 03:31:26.180398 33141 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 08 03:31:26.191413 master-0 kubenswrapper[33141]: W0308 03:31:26.180422 33141 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 08 03:31:26.191413 master-0 kubenswrapper[33141]: W0308 03:31:26.180427 33141 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 08 03:31:26.191413 master-0 kubenswrapper[33141]: W0308 03:31:26.180431 33141 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 08 03:31:26.191413 master-0 kubenswrapper[33141]: W0308 03:31:26.180436 33141 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 08 03:31:26.191413 master-0 kubenswrapper[33141]: W0308 03:31:26.180440 33141 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 08 03:31:26.191413 master-0 kubenswrapper[33141]: W0308 03:31:26.180444 33141 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 08 03:31:26.191413 master-0 kubenswrapper[33141]: W0308 03:31:26.180448 33141 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 08 03:31:26.191413 master-0 kubenswrapper[33141]: W0308 03:31:26.180453 33141 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 08 03:31:26.191413 master-0 kubenswrapper[33141]: W0308 03:31:26.180459 33141 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 08 03:31:26.191413 master-0 kubenswrapper[33141]: W0308 03:31:26.180465 33141 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 08 03:31:26.192375 master-0 kubenswrapper[33141]: W0308 03:31:26.180469 33141 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 08 03:31:26.192375 master-0 kubenswrapper[33141]: W0308 03:31:26.180472 33141 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 08 03:31:26.192375 master-0 kubenswrapper[33141]: W0308 03:31:26.180476 33141 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 08 03:31:26.192375 master-0 kubenswrapper[33141]: W0308 03:31:26.180479 33141 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 08 03:31:26.192375 master-0 kubenswrapper[33141]: I0308 03:31:26.180486 33141 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 08 03:31:26.192375 master-0 kubenswrapper[33141]: I0308 03:31:26.187002 33141 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 08 03:31:26.192375 master-0 kubenswrapper[33141]: I0308 03:31:26.187059 33141 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 08 03:31:26.192375 master-0 kubenswrapper[33141]: W0308 03:31:26.187274 33141 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 08 03:31:26.192375 master-0 kubenswrapper[33141]: W0308 03:31:26.187298 33141 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 08 03:31:26.192375 master-0 kubenswrapper[33141]: W0308 03:31:26.187310 33141 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 08 03:31:26.192375 master-0 kubenswrapper[33141]: W0308 03:31:26.187323 33141 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 08 03:31:26.192375 master-0 kubenswrapper[33141]: W0308 03:31:26.187337 33141 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 08 03:31:26.192375 master-0 kubenswrapper[33141]: W0308 03:31:26.187349 33141 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 08 03:31:26.192375 master-0 kubenswrapper[33141]: W0308 03:31:26.187361 33141 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 08 03:31:26.192375 master-0 kubenswrapper[33141]: W0308 03:31:26.187372 33141 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 08 03:31:26.204688 master-0 kubenswrapper[33141]: W0308 03:31:26.187385 33141 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 08 03:31:26.204688 master-0 kubenswrapper[33141]: W0308 03:31:26.187398 33141 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 08 03:31:26.204688 master-0 kubenswrapper[33141]: W0308 03:31:26.187409 33141 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 08 03:31:26.204688 master-0 kubenswrapper[33141]: W0308 03:31:26.187420 33141 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 08 03:31:26.204688 master-0 kubenswrapper[33141]: W0308 03:31:26.187432 33141 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 08 03:31:26.204688 master-0 kubenswrapper[33141]: W0308 03:31:26.187466 33141 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 08 03:31:26.204688 master-0 kubenswrapper[33141]: W0308 03:31:26.187478 33141 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 08 03:31:26.204688 master-0 kubenswrapper[33141]: W0308 03:31:26.187490 33141 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 08 03:31:26.204688 master-0 kubenswrapper[33141]: W0308 03:31:26.187501 33141 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 08 03:31:26.204688 master-0 kubenswrapper[33141]: W0308 03:31:26.187513 33141 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 08 03:31:26.204688 master-0 kubenswrapper[33141]: W0308 03:31:26.187525 33141 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 08 03:31:26.204688 master-0 kubenswrapper[33141]: W0308 03:31:26.187537 33141 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 08 03:31:26.204688 master-0 kubenswrapper[33141]: W0308 03:31:26.187548 33141 feature_gate.go:330] unrecognized feature gate:
OVNObservability Mar 08 03:31:26.204688 master-0 kubenswrapper[33141]: W0308 03:31:26.187560 33141 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 08 03:31:26.204688 master-0 kubenswrapper[33141]: W0308 03:31:26.187571 33141 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 08 03:31:26.204688 master-0 kubenswrapper[33141]: W0308 03:31:26.187583 33141 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 08 03:31:26.204688 master-0 kubenswrapper[33141]: W0308 03:31:26.187595 33141 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 08 03:31:26.204688 master-0 kubenswrapper[33141]: W0308 03:31:26.187606 33141 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 08 03:31:26.204688 master-0 kubenswrapper[33141]: W0308 03:31:26.187618 33141 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 08 03:31:26.204688 master-0 kubenswrapper[33141]: W0308 03:31:26.187629 33141 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 08 03:31:26.210742 master-0 kubenswrapper[33141]: W0308 03:31:26.187641 33141 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 08 03:31:26.210742 master-0 kubenswrapper[33141]: W0308 03:31:26.187657 33141 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 08 03:31:26.210742 master-0 kubenswrapper[33141]: W0308 03:31:26.187677 33141 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 08 03:31:26.210742 master-0 kubenswrapper[33141]: W0308 03:31:26.187693 33141 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 08 03:31:26.210742 master-0 kubenswrapper[33141]: W0308 03:31:26.187707 33141 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 08 03:31:26.210742 master-0 kubenswrapper[33141]: W0308 03:31:26.187719 33141 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 08 03:31:26.210742 master-0 kubenswrapper[33141]: W0308 03:31:26.187734 33141 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 08 03:31:26.210742 master-0 kubenswrapper[33141]: W0308 03:31:26.187748 33141 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 08 03:31:26.210742 master-0 kubenswrapper[33141]: W0308 03:31:26.187761 33141 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 08 03:31:26.210742 master-0 kubenswrapper[33141]: W0308 03:31:26.187774 33141 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 08 03:31:26.210742 master-0 kubenswrapper[33141]: W0308 03:31:26.187786 33141 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 08 03:31:26.210742 master-0 kubenswrapper[33141]: W0308 03:31:26.187798 33141 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 08 03:31:26.210742 master-0 kubenswrapper[33141]: W0308 03:31:26.187815 33141 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 08 03:31:26.210742 master-0 kubenswrapper[33141]: W0308 03:31:26.187827 33141 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 08 03:31:26.210742 master-0 kubenswrapper[33141]: W0308 03:31:26.187840 33141 feature_gate.go:330] unrecognized feature gate: GatewayAPI 
Mar 08 03:31:26.210742 master-0 kubenswrapper[33141]: W0308 03:31:26.187853 33141 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 08 03:31:26.210742 master-0 kubenswrapper[33141]: W0308 03:31:26.187866 33141 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 08 03:31:26.210742 master-0 kubenswrapper[33141]: W0308 03:31:26.187878 33141 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 08 03:31:26.210742 master-0 kubenswrapper[33141]: W0308 03:31:26.187890 33141 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 08 03:31:26.213056 master-0 kubenswrapper[33141]: W0308 03:31:26.187944 33141 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 08 03:31:26.213056 master-0 kubenswrapper[33141]: W0308 03:31:26.187961 33141 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 08 03:31:26.213056 master-0 kubenswrapper[33141]: W0308 03:31:26.187977 33141 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 08 03:31:26.213056 master-0 kubenswrapper[33141]: W0308 03:31:26.187992 33141 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 08 03:31:26.213056 master-0 kubenswrapper[33141]: W0308 03:31:26.188005 33141 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 08 03:31:26.213056 master-0 kubenswrapper[33141]: W0308 03:31:26.188017 33141 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 08 03:31:26.213056 master-0 kubenswrapper[33141]: W0308 03:31:26.188032 33141 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 08 03:31:26.213056 master-0 kubenswrapper[33141]: W0308 03:31:26.188046 33141 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 08 03:31:26.213056 master-0 kubenswrapper[33141]: W0308 03:31:26.188058 33141 feature_gate.go:330] unrecognized feature gate: Example
Mar 08 03:31:26.213056 master-0 kubenswrapper[33141]: W0308 03:31:26.188070 33141 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 08 03:31:26.213056 master-0 kubenswrapper[33141]: W0308 03:31:26.188082 33141 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 08 03:31:26.213056 master-0 kubenswrapper[33141]: W0308 03:31:26.188093 33141 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 08 03:31:26.213056 master-0 kubenswrapper[33141]: W0308 03:31:26.188107 33141 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 08 03:31:26.213056 master-0 kubenswrapper[33141]: W0308 03:31:26.188119 33141 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 08 03:31:26.213056 master-0 kubenswrapper[33141]: W0308 03:31:26.188131 33141 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 08 03:31:26.213056 master-0 kubenswrapper[33141]: W0308 03:31:26.188143 33141 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 08 03:31:26.213056 master-0 kubenswrapper[33141]: W0308 03:31:26.188155 33141 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 08 03:31:26.213056 master-0 kubenswrapper[33141]: W0308 03:31:26.188167 33141 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 08 03:31:26.213056 master-0 kubenswrapper[33141]: W0308 03:31:26.188178 33141 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 08 03:31:26.213056 master-0 kubenswrapper[33141]: W0308 03:31:26.188191 33141 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 08 03:31:26.213838 master-0 kubenswrapper[33141]: W0308 03:31:26.188202 33141 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 08 03:31:26.213838 master-0 kubenswrapper[33141]: W0308 03:31:26.188213 33141 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 08 03:31:26.213838 master-0 kubenswrapper[33141]: W0308 03:31:26.188225 33141 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 08 03:31:26.213838 master-0 kubenswrapper[33141]: W0308 03:31:26.188237 33141 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 08 03:31:26.213838 master-0 kubenswrapper[33141]: W0308 03:31:26.188248 33141 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 08 03:31:26.213838 master-0 kubenswrapper[33141]: I0308 03:31:26.188268 33141 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 08 03:31:26.213838 master-0 kubenswrapper[33141]: W0308 03:31:26.188585 33141 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 08 03:31:26.213838 master-0 kubenswrapper[33141]: W0308 03:31:26.188607 33141 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 08 03:31:26.213838 master-0 kubenswrapper[33141]: W0308 03:31:26.188621 33141 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 08 03:31:26.213838 master-0 kubenswrapper[33141]: W0308 03:31:26.188636 33141 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 08 03:31:26.213838 master-0 kubenswrapper[33141]: W0308 03:31:26.188647 33141 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 08 03:31:26.213838 master-0 kubenswrapper[33141]: W0308 03:31:26.188659 33141 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 08 03:31:26.213838 master-0 kubenswrapper[33141]: W0308 03:31:26.188671 33141 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 08 03:31:26.213838 master-0 kubenswrapper[33141]: W0308 03:31:26.188682 33141 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 08 03:31:26.213838 master-0 kubenswrapper[33141]: W0308 03:31:26.188697 33141 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 08 03:31:26.213838 master-0 kubenswrapper[33141]: W0308 03:31:26.188709 33141 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 08 03:31:26.214458 master-0 kubenswrapper[33141]: W0308 03:31:26.188720 33141 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 08 03:31:26.214458 master-0 kubenswrapper[33141]: W0308 03:31:26.188731 33141 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 08 03:31:26.214458 master-0 kubenswrapper[33141]: W0308 03:31:26.188741 33141 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 08 03:31:26.214458 master-0 kubenswrapper[33141]: W0308 03:31:26.188755 33141 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 08 03:31:26.214458 master-0 kubenswrapper[33141]: W0308 03:31:26.188765 33141 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 08 03:31:26.214458 master-0 kubenswrapper[33141]: W0308 03:31:26.188779 33141 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 08 03:31:26.214458 master-0 kubenswrapper[33141]: W0308 03:31:26.188796 33141 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 08 03:31:26.214458 master-0 kubenswrapper[33141]: W0308 03:31:26.188808 33141 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 08 03:31:26.214458 master-0 kubenswrapper[33141]: W0308 03:31:26.188821 33141 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 08 03:31:26.214458 master-0 kubenswrapper[33141]: W0308 03:31:26.188833 33141 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 08 03:31:26.214458 master-0 kubenswrapper[33141]: W0308 03:31:26.188847 33141 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 08 03:31:26.214458 master-0 kubenswrapper[33141]: W0308 03:31:26.188859 33141 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 08 03:31:26.214458 master-0 kubenswrapper[33141]: W0308 03:31:26.188871 33141 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 08 03:31:26.214458 master-0 kubenswrapper[33141]: W0308 03:31:26.188883 33141 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 08 03:31:26.214458 master-0 kubenswrapper[33141]: W0308 03:31:26.188894 33141 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 08 03:31:26.214458 master-0 kubenswrapper[33141]: W0308 03:31:26.188947 33141 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 08 03:31:26.214458 master-0 kubenswrapper[33141]: W0308 03:31:26.188960 33141 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 08 03:31:26.214458 master-0 kubenswrapper[33141]: W0308 03:31:26.188970 33141 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 08 03:31:26.214458 master-0 kubenswrapper[33141]: W0308 03:31:26.188981 33141 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 08 03:31:26.215142 master-0 kubenswrapper[33141]: W0308 03:31:26.188993 33141 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 08 03:31:26.215142 master-0 kubenswrapper[33141]: W0308 03:31:26.189007 33141 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 08 03:31:26.215142 master-0 kubenswrapper[33141]: W0308 03:31:26.189021 33141 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 08 03:31:26.215142 master-0 kubenswrapper[33141]: W0308 03:31:26.189036 33141 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 08 03:31:26.215142 master-0 kubenswrapper[33141]: W0308 03:31:26.189049 33141 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 08 03:31:26.215142 master-0 kubenswrapper[33141]: W0308 03:31:26.189064 33141 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 08 03:31:26.215142 master-0 kubenswrapper[33141]: W0308 03:31:26.189080 33141 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 08 03:31:26.215142 master-0 kubenswrapper[33141]: W0308 03:31:26.189093 33141 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 08 03:31:26.215142 master-0 kubenswrapper[33141]: W0308 03:31:26.189104 33141 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 08 03:31:26.215142 master-0 kubenswrapper[33141]: W0308 03:31:26.189116 33141 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 08 03:31:26.215142 master-0 kubenswrapper[33141]: W0308 03:31:26.189131 33141 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 08 03:31:26.215142 master-0 kubenswrapper[33141]: W0308 03:31:26.189143 33141 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 08 03:31:26.215142 master-0 kubenswrapper[33141]: W0308 03:31:26.189155 33141 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 08 03:31:26.215142 master-0 kubenswrapper[33141]: W0308 03:31:26.189167 33141 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 08 03:31:26.215142 master-0 kubenswrapper[33141]: W0308 03:31:26.189178 33141 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 08 03:31:26.215142 master-0 kubenswrapper[33141]: W0308 03:31:26.189190 33141 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 08 03:31:26.215142 master-0 kubenswrapper[33141]: W0308 03:31:26.189201 33141 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 08 03:31:26.215142 master-0 kubenswrapper[33141]: W0308 03:31:26.189213 33141 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 08 03:31:26.215142 master-0 kubenswrapper[33141]: W0308 03:31:26.189224 33141 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 08 03:31:26.215829 master-0 kubenswrapper[33141]: W0308 03:31:26.189236 33141 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 08 03:31:26.215829 master-0 kubenswrapper[33141]: W0308 03:31:26.189247 33141 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 08 03:31:26.215829 master-0 kubenswrapper[33141]: W0308 03:31:26.189259 33141 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 08 03:31:26.215829 master-0 kubenswrapper[33141]: W0308 03:31:26.189270 33141 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 08 03:31:26.215829 master-0 kubenswrapper[33141]: W0308 03:31:26.189282 33141 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 08 03:31:26.215829 master-0 kubenswrapper[33141]: W0308 03:31:26.189294 33141 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 08 03:31:26.215829 master-0 kubenswrapper[33141]: W0308 03:31:26.189305 33141 feature_gate.go:330] unrecognized feature gate: Example
Mar 08 03:31:26.215829 master-0 kubenswrapper[33141]: W0308 03:31:26.189317 33141 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 08 03:31:26.215829 master-0 kubenswrapper[33141]: W0308 03:31:26.189329 33141 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 08 03:31:26.215829 master-0 kubenswrapper[33141]: W0308 03:31:26.189341 33141 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 08 03:31:26.215829 master-0 kubenswrapper[33141]: W0308 03:31:26.189353 33141 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 08 03:31:26.215829 master-0 kubenswrapper[33141]: W0308 03:31:26.189364 33141 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 08 03:31:26.215829 master-0 kubenswrapper[33141]: W0308 03:31:26.189377 33141 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 08 03:31:26.215829 master-0 kubenswrapper[33141]: W0308 03:31:26.189389 33141 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 08 03:31:26.215829 master-0 kubenswrapper[33141]: W0308 03:31:26.189401 33141 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 08 03:31:26.215829 master-0 kubenswrapper[33141]: W0308 03:31:26.189413 33141 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 08 03:31:26.215829 master-0 kubenswrapper[33141]: W0308 03:31:26.189424 33141 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 08 03:31:26.215829 master-0 kubenswrapper[33141]: W0308 03:31:26.189436 33141 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 08 03:31:26.215829 master-0 kubenswrapper[33141]: W0308 03:31:26.189450 33141 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 08 03:31:26.215829 master-0 kubenswrapper[33141]: W0308 03:31:26.189465 33141 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 08 03:31:26.216571 master-0 kubenswrapper[33141]: W0308 03:31:26.189477 33141 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 08 03:31:26.216571 master-0 kubenswrapper[33141]: W0308 03:31:26.189488 33141 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 08 03:31:26.216571 master-0 kubenswrapper[33141]: W0308 03:31:26.189499 33141 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 08 03:31:26.216571 master-0 kubenswrapper[33141]: W0308 03:31:26.189512 33141 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 08 03:31:26.216571 master-0 kubenswrapper[33141]: I0308 03:31:26.189528 33141 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 08 03:31:26.216571 master-0 kubenswrapper[33141]: I0308 03:31:26.189871 33141 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 08 03:31:26.216571 master-0 kubenswrapper[33141]: I0308 03:31:26.194184 33141 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 08 03:31:26.216571 master-0 kubenswrapper[33141]: I0308 03:31:26.194261 33141 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 08 03:31:26.216571 master-0 kubenswrapper[33141]: I0308 03:31:26.194458 33141 server.go:997] "Starting client certificate rotation"
Mar 08 03:31:26.216571 master-0 kubenswrapper[33141]: I0308 03:31:26.194468 33141 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 08 03:31:26.216571 master-0 kubenswrapper[33141]: I0308 03:31:26.194658 33141 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-09 03:01:08 +0000 UTC, rotation deadline is 2026-03-08 21:40:54.340341823 +0000 UTC
Mar 08 03:31:26.216571 master-0 kubenswrapper[33141]: I0308 03:31:26.194805 33141 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h9m28.145541473s for next certificate rotation
Mar 08 03:31:26.217026 master-0 kubenswrapper[33141]: I0308 03:31:26.195039 33141 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 08 03:31:26.217026 master-0 kubenswrapper[33141]: I0308 03:31:26.196277 33141 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 08 03:31:26.217026 master-0 kubenswrapper[33141]: I0308 03:31:26.204509 33141 log.go:25] "Validated CRI v1 runtime API"
Mar 08 03:31:26.217026 master-0 kubenswrapper[33141]: I0308 03:31:26.210699 33141 log.go:25] "Validated CRI v1 image API"
Mar 08 03:31:26.217026 master-0 kubenswrapper[33141]: I0308 03:31:26.212126 33141 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 08 03:31:26.224643 master-0 kubenswrapper[33141]: I0308 03:31:26.224574 33141 fs.go:135] Filesystem UUIDs: map[0b52d2da-0de4-4c5d-93b4-a42985f64420:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Mar 08 03:31:26.226224 master-0 kubenswrapper[33141]: I0308 03:31:26.224631 33141 fs.go:136] Filesystem partitions:
map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/005487746ccdf8af07cdeab4d2100f98db1e134d2cd05ee46be8a62328152f7d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/005487746ccdf8af07cdeab4d2100f98db1e134d2cd05ee46be8a62328152f7d/userdata/shm major:0 minor:1109 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/08be87d753f8ff54c42a674e20a358f8fd1197e96c11ac4af2d4563dac916924/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/08be87d753f8ff54c42a674e20a358f8fd1197e96c11ac4af2d4563dac916924/userdata/shm major:0 minor:521 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0a9eb19952ec20b1658c5d7279dba5a3e819952572f69b34c3995c362fd16f77/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0a9eb19952ec20b1658c5d7279dba5a3e819952572f69b34c3995c362fd16f77/userdata/shm major:0 minor:247 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0e61ec2701bfc25eb5be928b08cc38e792bd258a0029c05a51bd1e479e58f0e3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0e61ec2701bfc25eb5be928b08cc38e792bd258a0029c05a51bd1e479e58f0e3/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1232aad5956093753d35685897e21ebb416211a87662dd6ecf51a5d3e9c0b32a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1232aad5956093753d35685897e21ebb416211a87662dd6ecf51a5d3e9c0b32a/userdata/shm major:0 minor:231 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/1389ca3c0a68c688490c2796e3b27e9ac02672c5ceeb0cb3aade38fd422867f7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1389ca3c0a68c688490c2796e3b27e9ac02672c5ceeb0cb3aade38fd422867f7/userdata/shm major:0 minor:849 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/15567f529dadb966bb3f2ed3bd55c3bbbb0f335669e907e0d29044fa59e27ca2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/15567f529dadb966bb3f2ed3bd55c3bbbb0f335669e907e0d29044fa59e27ca2/userdata/shm major:0 minor:1034 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/17b37add10475bc68eb15628021eecebb97b383f212ff9b1f6eec1b7b5ecb93d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/17b37add10475bc68eb15628021eecebb97b383f212ff9b1f6eec1b7b5ecb93d/userdata/shm major:0 minor:279 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1a7085411bd9650b06b777535c32a51b5f0829889be0498544a2a5320ab65b31/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1a7085411bd9650b06b777535c32a51b5f0829889be0498544a2a5320ab65b31/userdata/shm major:0 minor:68 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1b34330ab0e38ca065ff7c208891466fd5dc198028c2433e196ee9914284d260/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1b34330ab0e38ca065ff7c208891466fd5dc198028c2433e196ee9914284d260/userdata/shm major:0 minor:416 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1b486915ec2d9eb73fc4331b88d96e65ac9fd451489c056db54081b15711177b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1b486915ec2d9eb73fc4331b88d96e65ac9fd451489c056db54081b15711177b/userdata/shm major:0 minor:628 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/296c48bf2ce9de06a78dcb57c1cdbe34ecc220f6b65f5aa0b90cfb68a9d30264/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/296c48bf2ce9de06a78dcb57c1cdbe34ecc220f6b65f5aa0b90cfb68a9d30264/userdata/shm major:0 minor:790 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2bd783cbda23be7989b39c47de53b6fd58c76ea7fdfdcd9d506ba6bc622ba3e3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2bd783cbda23be7989b39c47de53b6fd58c76ea7fdfdcd9d506ba6bc622ba3e3/userdata/shm major:0 minor:879 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2cfaca9fcdc537eb7c408c01daad733c4e6c46861c4477e533321e5ad366b94d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2cfaca9fcdc537eb7c408c01daad733c4e6c46861c4477e533321e5ad366b94d/userdata/shm major:0 minor:144 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2db78ea27514b302571913d9c4c80a0241da223717474e7c9dd37ca7d04999ae/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2db78ea27514b302571913d9c4c80a0241da223717474e7c9dd37ca7d04999ae/userdata/shm major:0 minor:1029 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/31218dcdf0ecf9df2bd5ef8038d35cfb3eccf97f3c92277ac22d33217175df8e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/31218dcdf0ecf9df2bd5ef8038d35cfb3eccf97f3c92277ac22d33217175df8e/userdata/shm major:0 minor:802 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/323b10005e4debbf49965c6c6b8a7d60537ce630469f2e6648f22893122d5907/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/323b10005e4debbf49965c6c6b8a7d60537ce630469f2e6648f22893122d5907/userdata/shm major:0 minor:461 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/32cd08c82c3a9782e49f0aedb6e9aa5133016a2e1a1a498bd5a24df1a9fb1acd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/32cd08c82c3a9782e49f0aedb6e9aa5133016a2e1a1a498bd5a24df1a9fb1acd/userdata/shm major:0 minor:237 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/33abd37edec3b6673abf4565124ec1bb97dfb231042f8c1557bae037c9db586c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/33abd37edec3b6673abf4565124ec1bb97dfb231042f8c1557bae037c9db586c/userdata/shm major:0 minor:281 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/343f5202f680e6489744b1829ff30f9c82b78fc022fbaf1325e4c8fa7cfe17d8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/343f5202f680e6489744b1829ff30f9c82b78fc022fbaf1325e4c8fa7cfe17d8/userdata/shm major:0 minor:951 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/361223b8a35fa2e488a299fb5b083b6bc9563230c5745f5243422471a4cde526/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/361223b8a35fa2e488a299fb5b083b6bc9563230c5745f5243422471a4cde526/userdata/shm major:0 minor:542 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3656e53b736cafa9b6c056ac5eca5807c9f3942f84ffbe91cd640949d983eff6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3656e53b736cafa9b6c056ac5eca5807c9f3942f84ffbe91cd640949d983eff6/userdata/shm major:0 minor:273 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3b7b4beff94637a634e8ef9e4b25f19f962ecdd386d4f992ddeae713d81fd595/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3b7b4beff94637a634e8ef9e4b25f19f962ecdd386d4f992ddeae713d81fd595/userdata/shm major:0 minor:555 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/3cd41a65358471f5054db74b4750cf6ade61d95a5a85377f17ce5e88dcbed459/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3cd41a65358471f5054db74b4750cf6ade61d95a5a85377f17ce5e88dcbed459/userdata/shm major:0 minor:632 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3fb6887992993ed2286a2778f2126c5d98e2f2a673949f835554364dd15f2803/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3fb6887992993ed2286a2778f2126c5d98e2f2a673949f835554364dd15f2803/userdata/shm major:0 minor:882 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/578f97e51f168b1d370b9c59540a7c839458a113d3777e0d88797827b040f10e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/578f97e51f168b1d370b9c59540a7c839458a113d3777e0d88797827b040f10e/userdata/shm major:0 minor:831 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/58f21db0fa1eb017fe823a0691c0c2ecef386aab7abe2946fa7a3c24e39e3c68/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/58f21db0fa1eb017fe823a0691c0c2ecef386aab7abe2946fa7a3c24e39e3c68/userdata/shm major:0 minor:69 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5f8a5dd7ddb9e30727d036901155a403a90563b27d3748f6e9c804013b40f108/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5f8a5dd7ddb9e30727d036901155a403a90563b27d3748f6e9c804013b40f108/userdata/shm major:0 minor:565 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5ffe2f08a61a9faac98a304d7e3f26296109a1c759116e58c683819c7d929612/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5ffe2f08a61a9faac98a304d7e3f26296109a1c759116e58c683819c7d929612/userdata/shm major:0 minor:634 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/63df01fd9ed048d9f095f5eeea9d96eeca7e15c41770d9375fbe4be8cc706183/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/63df01fd9ed048d9f095f5eeea9d96eeca7e15c41770d9375fbe4be8cc706183/userdata/shm major:0 minor:729 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6c2ad8212c197eee7b469f1de5efa66984b471df3e1f03d54b6b5ff8745f2152/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6c2ad8212c197eee7b469f1de5efa66984b471df3e1f03d54b6b5ff8745f2152/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/708fff129dc113f73aa37f475b4ae4bc5c5913ac215686fbff11aa81a810bb5e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/708fff129dc113f73aa37f475b4ae4bc5c5913ac215686fbff11aa81a810bb5e/userdata/shm major:0 minor:89 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7318cd3451d32a71b4c756d7048c3d653bc133c447ae6a1c5c593d8efda4718a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7318cd3451d32a71b4c756d7048c3d653bc133c447ae6a1c5c593d8efda4718a/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/78bd83c51ec0b72f8c1c51a4e8cc4279f7e9fc2470a6586c4f8e968fc90dd9c1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/78bd83c51ec0b72f8c1c51a4e8cc4279f7e9fc2470a6586c4f8e968fc90dd9c1/userdata/shm major:0 minor:241 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7a6ea17a030d90670e0e331f269af06bb55ade280ec6f510768c353e818db740/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7a6ea17a030d90670e0e331f269af06bb55ade280ec6f510768c353e818db740/userdata/shm major:0 minor:1147 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/7ae6734dc9a6a4883d043259eba3b292e17119fb0b35a539821b49660768f326/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7ae6734dc9a6a4883d043259eba3b292e17119fb0b35a539821b49660768f326/userdata/shm major:0 minor:76 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7b27a4cf8670701cc2abed7a5d7cf91c3ac386bb22a1ffb161f3900b04157d20/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7b27a4cf8670701cc2abed7a5d7cf91c3ac386bb22a1ffb161f3900b04157d20/userdata/shm major:0 minor:655 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7edd93db0d8a06f729ecca24b4b7c8fc7864a838f800dec0e7d8fc63c8370d81/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7edd93db0d8a06f729ecca24b4b7c8fc7864a838f800dec0e7d8fc63c8370d81/userdata/shm major:0 minor:480 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7f21e214cb8d847d79985954284fcf2d5d0fe1c85a034843bd4226982b10fa7b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7f21e214cb8d847d79985954284fcf2d5d0fe1c85a034843bd4226982b10fa7b/userdata/shm major:0 minor:1039 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/80f8e0a5b29cf774f05a36f5e54407ef8ecffe58d5e1c71074bcd340ab2217dd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/80f8e0a5b29cf774f05a36f5e54407ef8ecffe58d5e1c71074bcd340ab2217dd/userdata/shm major:0 minor:812 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/846f36ee6a71e885eba4255e43db9daaf610d513f1e85ae2a0f46bf5cfb8b1a1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/846f36ee6a71e885eba4255e43db9daaf610d513f1e85ae2a0f46bf5cfb8b1a1/userdata/shm major:0 minor:759 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/8763acbe8455fad4530b6a292ec3d641368771a0e2662a77415028cd12a34859/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8763acbe8455fad4530b6a292ec3d641368771a0e2662a77415028cd12a34859/userdata/shm major:0 minor:1169 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/888efea2277e570177f0a32dc3869b5a0e7a8f448a8a3f5fd3fa3dbd19d67ef3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/888efea2277e570177f0a32dc3869b5a0e7a8f448a8a3f5fd3fa3dbd19d67ef3/userdata/shm major:0 minor:430 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/901d5d72687a570475c0c1ccb8e78c8e542036296238b7606d96a86beb5c35c7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/901d5d72687a570475c0c1ccb8e78c8e542036296238b7606d96a86beb5c35c7/userdata/shm major:0 minor:631 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/90d6dd3478d5a96b9991ca2dea6f7e3c092c924b63627e5a5258e2d1cefa9467/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/90d6dd3478d5a96b9991ca2dea6f7e3c092c924b63627e5a5258e2d1cefa9467/userdata/shm major:0 minor:907 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/975b4d0b44381f65f95d81f848a4362b6807994f0beac99be40baae93513b5d6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/975b4d0b44381f65f95d81f848a4362b6807994f0beac99be40baae93513b5d6/userdata/shm major:0 minor:253 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/981e0f271702172a27daba182461095b8682ca12b72ed3f46de2b6751994f11f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/981e0f271702172a27daba182461095b8682ca12b72ed3f46de2b6751994f11f/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/995e6e9f26bc876fb60a003dcae56035a03e0c1a1cc126a768cf25270214d713/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/995e6e9f26bc876fb60a003dcae56035a03e0c1a1cc126a768cf25270214d713/userdata/shm major:0 minor:1007 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9cf19296313ccb0a9f49159a002819b23609566806a638c368fc850d3dc27bd2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9cf19296313ccb0a9f49159a002819b23609566806a638c368fc850d3dc27bd2/userdata/shm major:0 minor:1049 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9cfe782c9ff029928aff445d3583f6e6a05ba9a4632c234c96ec9b0f2402bfc5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9cfe782c9ff029928aff445d3583f6e6a05ba9a4632c234c96ec9b0f2402bfc5/userdata/shm major:0 minor:57 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a71f01482badfd599ecfabb1babd6c7d23f18015321cbb4541d2c57b236ce1e9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a71f01482badfd599ecfabb1babd6c7d23f18015321cbb4541d2c57b236ce1e9/userdata/shm major:0 minor:389 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/adabf6ff71c6a21ac7dd07e118092057910e34a7816affdbe09eba458256dabb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/adabf6ff71c6a21ac7dd07e118092057910e34a7816affdbe09eba458256dabb/userdata/shm major:0 minor:318 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b1f92e19e760a85c21780cc29101c92446f01b76f5fa8e09729c263a935894ed/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b1f92e19e760a85c21780cc29101c92446f01b76f5fa8e09729c263a935894ed/userdata/shm major:0 minor:564 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b47ec93978468330f5b6fd9911611a54c62310997396935ab30d9d7feb5533c5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b47ec93978468330f5b6fd9911611a54c62310997396935ab30d9d7feb5533c5/userdata/shm major:0 minor:228 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b5a1a52b83c9907ea89396038c11ee345fe83157541875e3f7507eab9c4bb205/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b5a1a52b83c9907ea89396038c11ee345fe83157541875e3f7507eab9c4bb205/userdata/shm major:0 minor:559 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b5b4816a1b0e9863b488619eb67bad29895714d7381b49c1cf6bbbe6c6b403f8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b5b4816a1b0e9863b488619eb67bad29895714d7381b49c1cf6bbbe6c6b403f8/userdata/shm major:0 minor:688 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b611cc0d60bde7b49abae1aff82de97336ebe3d15e74f2544de647745e83e553/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b611cc0d60bde7b49abae1aff82de97336ebe3d15e74f2544de647745e83e553/userdata/shm major:0 minor:1092 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b66b70c78dec2cc9fda46d55ae86f4ac9d3a2e620b251090c661d75cafe17663/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b66b70c78dec2cc9fda46d55ae86f4ac9d3a2e620b251090c661d75cafe17663/userdata/shm major:0 minor:866 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b835d8031dbcbc04b5cf9f5f9326f7df63aa6cc447918f61407dc7395da0cf96/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b835d8031dbcbc04b5cf9f5f9326f7df63aa6cc447918f61407dc7395da0cf96/userdata/shm major:0 minor:277 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/bc9a73581d61f23b90565e1504479bb07c7036d273c39ea527a5c5e5f96ad318/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bc9a73581d61f23b90565e1504479bb07c7036d273c39ea527a5c5e5f96ad318/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c5a4db52edd426e8cea689535b3e9c7e16767678dd5ad98d256870c1726c756c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c5a4db52edd426e8cea689535b3e9c7e16767678dd5ad98d256870c1726c756c/userdata/shm major:0 minor:998 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c6d3624a26cf17ed6d9d863dbd0123f9d75c4ad1fd279b49f51b9d0ec0bcd2e7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c6d3624a26cf17ed6d9d863dbd0123f9d75c4ad1fd279b49f51b9d0ec0bcd2e7/userdata/shm major:0 minor:249 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c955986a722d7c797742e1c5d2eda34143fb5f9b3ba2a0f15453a1ce4e4cb127/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c955986a722d7c797742e1c5d2eda34143fb5f9b3ba2a0f15453a1ce4e4cb127/userdata/shm major:0 minor:629 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cd205a040d032b191e7f07df4a3f791df390b5a5d5098d634b2bcb3100b4a7bb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cd205a040d032b191e7f07df4a3f791df390b5a5d5098d634b2bcb3100b4a7bb/userdata/shm major:0 minor:804 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d159152a376a0a7f2611797aef08a7b7f0428f856929aff15f4081f4e7f23f1e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d159152a376a0a7f2611797aef08a7b7f0428f856929aff15f4081f4e7f23f1e/userdata/shm major:0 minor:387 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/d577cf22293cc3efccf6f8d7b5c5def3ac27aeb747212f6643892edfacc4bbc3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d577cf22293cc3efccf6f8d7b5c5def3ac27aeb747212f6643892edfacc4bbc3/userdata/shm major:0 minor:291 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/da13ebe4bb39b539d69ddd6f98c92aef7a368cb8e590b47b5129b0e84f51f727/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/da13ebe4bb39b539d69ddd6f98c92aef7a368cb8e590b47b5129b0e84f51f727/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e0863a084dab5a5150480ef18603c4be97dcab69eda52c04e9d468c989d32511/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e0863a084dab5a5150480ef18603c4be97dcab69eda52c04e9d468c989d32511/userdata/shm major:0 minor:104 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e677a54e6724884557ae20d247d9a84e80a29107af56ad730c6c9a95dbebf9a5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e677a54e6724884557ae20d247d9a84e80a29107af56ad730c6c9a95dbebf9a5/userdata/shm major:0 minor:627 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e7ddc2cc17107ecc5f5679a895a40a2316543cd8ac3957bbb6fdbfd52f258bbd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e7ddc2cc17107ecc5f5679a895a40a2316543cd8ac3957bbb6fdbfd52f258bbd/userdata/shm major:0 minor:256 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f061dbce14702bf613c2afa174a972bae2bb5e74063744b88de9bb9b512fc912/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f061dbce14702bf613c2afa174a972bae2bb5e74063744b88de9bb9b512fc912/userdata/shm major:0 minor:431 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/f2057fa5db1def1b4beab4f6ad7ad5d375b26c00136a93b9850880221e4af077/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f2057fa5db1def1b4beab4f6ad7ad5d375b26c00136a93b9850880221e4af077/userdata/shm major:0 minor:100 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f30b40b5dee25f4cfef68deaa81953cc276010f2fb26052242518f7b573301d1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f30b40b5dee25f4cfef68deaa81953cc276010f2fb26052242518f7b573301d1/userdata/shm major:0 minor:939 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f47ce532692381e3555ceaa331dea07e3ba8f75b7ab217af49fad07906bb6714/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f47ce532692381e3555ceaa331dea07e3ba8f75b7ab217af49fad07906bb6714/userdata/shm major:0 minor:909 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f5a6cee35f22c780870380f137c7c7ac5cad4e9bf1cc3de7531cd3267c12f312/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f5a6cee35f22c780870380f137c7c7ac5cad4e9bf1cc3de7531cd3267c12f312/userdata/shm major:0 minor:456 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f7b4207e156e5bf2edc3fece9e2843a82ae15105a8e6a5ed4d557ebec8b1b2e1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f7b4207e156e5bf2edc3fece9e2843a82ae15105a8e6a5ed4d557ebec8b1b2e1/userdata/shm major:0 minor:376 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f7ce1d7e36af0a8d1a304742efe774e5b42b51a042e077bc8da8bd1a942eda38/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f7ce1d7e36af0a8d1a304742efe774e5b42b51a042e077bc8da8bd1a942eda38/userdata/shm major:0 minor:911 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/fcc3b92d08a13fa636c372e9652644c8188d8f895a9f938085de2edbe54bf982/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fcc3b92d08a13fa636c372e9652644c8188d8f895a9f938085de2edbe54bf982/userdata/shm major:0 minor:443 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0722d9c3-77b8-4770-9171-d4aeba4b0cc7/volumes/kubernetes.io~projected/kube-api-access-vnvtg:{mountpoint:/var/lib/kubelet/pods/0722d9c3-77b8-4770-9171-d4aeba4b0cc7/volumes/kubernetes.io~projected/kube-api-access-vnvtg major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0722d9c3-77b8-4770-9171-d4aeba4b0cc7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0722d9c3-77b8-4770-9171-d4aeba4b0cc7/volumes/kubernetes.io~secret/serving-cert major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774/volumes/kubernetes.io~projected/kube-api-access-w2ng6:{mountpoint:/var/lib/kubelet/pods/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774/volumes/kubernetes.io~projected/kube-api-access-w2ng6 major:0 minor:333 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/103158c5-c99f-4224-bf5a-e23b1aaf9172/volumes/kubernetes.io~projected/kube-api-access-m5pgg:{mountpoint:/var/lib/kubelet/pods/103158c5-c99f-4224-bf5a-e23b1aaf9172/volumes/kubernetes.io~projected/kube-api-access-m5pgg major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/103158c5-c99f-4224-bf5a-e23b1aaf9172/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/103158c5-c99f-4224-bf5a-e23b1aaf9172/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:384 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/103158c5-c99f-4224-bf5a-e23b1aaf9172/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/103158c5-c99f-4224-bf5a-e23b1aaf9172/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:409 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/16ca7ace-9608-4686-a039-a6ba6e3ab837/volumes/kubernetes.io~projected/kube-api-access-w8cgc:{mountpoint:/var/lib/kubelet/pods/16ca7ace-9608-4686-a039-a6ba6e3ab837/volumes/kubernetes.io~projected/kube-api-access-w8cgc major:0 minor:1002 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16ca7ace-9608-4686-a039-a6ba6e3ab837/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/16ca7ace-9608-4686-a039-a6ba6e3ab837/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:999 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16ca7ace-9608-4686-a039-a6ba6e3ab837/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/16ca7ace-9608-4686-a039-a6ba6e3ab837/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:1000 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/197afe92-5912-4e90-a477-e3abe001bbc7/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/197afe92-5912-4e90-a477-e3abe001bbc7/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/197afe92-5912-4e90-a477-e3abe001bbc7/volumes/kubernetes.io~projected/kube-api-access-2kd6j:{mountpoint:/var/lib/kubelet/pods/197afe92-5912-4e90-a477-e3abe001bbc7/volumes/kubernetes.io~projected/kube-api-access-2kd6j major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/197afe92-5912-4e90-a477-e3abe001bbc7/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/197afe92-5912-4e90-a477-e3abe001bbc7/volumes/kubernetes.io~secret/metrics-tls major:0 minor:450 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d446527-f3fd-4a37-a980-7445031928d1/volumes/kubernetes.io~projected/kube-api-access-2qvl4:{mountpoint:/var/lib/kubelet/pods/1d446527-f3fd-4a37-a980-7445031928d1/volumes/kubernetes.io~projected/kube-api-access-2qvl4 major:0 minor:272 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/1d446527-f3fd-4a37-a980-7445031928d1/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1d446527-f3fd-4a37-a980-7445031928d1/volumes/kubernetes.io~secret/serving-cert major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1e82d678-b5bb-4aec-9b5d-435305e8bdc2/volumes/kubernetes.io~projected/kube-api-access-ppbl6:{mountpoint:/var/lib/kubelet/pods/1e82d678-b5bb-4aec-9b5d-435305e8bdc2/volumes/kubernetes.io~projected/kube-api-access-ppbl6 major:0 minor:1105 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1e82d678-b5bb-4aec-9b5d-435305e8bdc2/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/1e82d678-b5bb-4aec-9b5d-435305e8bdc2/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1103 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1e82d678-b5bb-4aec-9b5d-435305e8bdc2/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/1e82d678-b5bb-4aec-9b5d-435305e8bdc2/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 minor:1098 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1e82d678-b5bb-4aec-9b5d-435305e8bdc2/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/1e82d678-b5bb-4aec-9b5d-435305e8bdc2/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 minor:1104 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1fa64f1b-9f10-488b-8f94-1600774062c4/volumes/kubernetes.io~projected/kube-api-access-8k2lp:{mountpoint:/var/lib/kubelet/pods/1fa64f1b-9f10-488b-8f94-1600774062c4/volumes/kubernetes.io~projected/kube-api-access-8k2lp major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1fa64f1b-9f10-488b-8f94-1600774062c4/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1fa64f1b-9f10-488b-8f94-1600774062c4/volumes/kubernetes.io~secret/serving-cert major:0 minor:224 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/2468d2a3-ec65-4888-a86a-3f66fa311f56/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/2468d2a3-ec65-4888-a86a-3f66fa311f56/volumes/kubernetes.io~projected/kube-api-access major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2468d2a3-ec65-4888-a86a-3f66fa311f56/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/2468d2a3-ec65-4888-a86a-3f66fa311f56/volumes/kubernetes.io~secret/serving-cert major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2728b91e-d59a-4e85-b245-0f297e9377f9/volumes/kubernetes.io~projected/kube-api-access-zmdmd:{mountpoint:/var/lib/kubelet/pods/2728b91e-d59a-4e85-b245-0f297e9377f9/volumes/kubernetes.io~projected/kube-api-access-zmdmd major:0 minor:801 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2728b91e-d59a-4e85-b245-0f297e9377f9/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/2728b91e-d59a-4e85-b245-0f297e9377f9/volumes/kubernetes.io~secret/serving-cert major:0 minor:800 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1/volumes/kubernetes.io~projected/kube-api-access-g28tv:{mountpoint:/var/lib/kubelet/pods/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1/volumes/kubernetes.io~projected/kube-api-access-g28tv major:0 minor:323 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:1051 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2a506cf6-bc39-4089-9caa-4c14c4d15c11/volumes/kubernetes.io~projected/kube-api-access-7flfl:{mountpoint:/var/lib/kubelet/pods/2a506cf6-bc39-4089-9caa-4c14c4d15c11/volumes/kubernetes.io~projected/kube-api-access-7flfl major:0 minor:264 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/2a506cf6-bc39-4089-9caa-4c14c4d15c11/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/2a506cf6-bc39-4089-9caa-4c14c4d15c11/volumes/kubernetes.io~secret/serving-cert major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2ffe00fd-6834-4a5b-8b0b-b467d284f23c/volumes/kubernetes.io~projected/kube-api-access-f42fg:{mountpoint:/var/lib/kubelet/pods/2ffe00fd-6834-4a5b-8b0b-b467d284f23c/volumes/kubernetes.io~projected/kube-api-access-f42fg major:0 minor:797 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2ffe00fd-6834-4a5b-8b0b-b467d284f23c/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/2ffe00fd-6834-4a5b-8b0b-b467d284f23c/volumes/kubernetes.io~secret/cert major:0 minor:1089 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/32a3f04f-05ea-4ee3-ac77-da375c39d104/volumes/kubernetes.io~projected/kube-api-access-fxjkw:{mountpoint:/var/lib/kubelet/pods/32a3f04f-05ea-4ee3-ac77-da375c39d104/volumes/kubernetes.io~projected/kube-api-access-fxjkw major:0 minor:401 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/38287d1a-b784-4ce9-9650-949d92469519/volumes/kubernetes.io~projected/kube-api-access-f4gcw:{mountpoint:/var/lib/kubelet/pods/38287d1a-b784-4ce9-9650-949d92469519/volumes/kubernetes.io~projected/kube-api-access-f4gcw major:0 minor:322 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/38287d1a-b784-4ce9-9650-949d92469519/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/38287d1a-b784-4ce9-9650-949d92469519/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:1037 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/399c5025-da66-4c52-8e68-ea6c996d9cc8/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/399c5025-da66-4c52-8e68-ea6c996d9cc8/volumes/kubernetes.io~projected/ca-certs major:0 minor:556 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/399c5025-da66-4c52-8e68-ea6c996d9cc8/volumes/kubernetes.io~projected/kube-api-access-vr9bw:{mountpoint:/var/lib/kubelet/pods/399c5025-da66-4c52-8e68-ea6c996d9cc8/volumes/kubernetes.io~projected/kube-api-access-vr9bw major:0 minor:561 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd/volumes/kubernetes.io~projected/kube-api-access-h4gf5:{mountpoint:/var/lib/kubelet/pods/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd/volumes/kubernetes.io~projected/kube-api-access-h4gf5 major:0 minor:554 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd/volumes/kubernetes.io~secret/encryption-config major:0 minor:551 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd/volumes/kubernetes.io~secret/etcd-client major:0 minor:552 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd/volumes/kubernetes.io~secret/serving-cert major:0 minor:553 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3c336192-80ee-4d53-a4ec-710cba95fac6/volumes/kubernetes.io~projected/kube-api-access-6tj8l:{mountpoint:/var/lib/kubelet/pods/3c336192-80ee-4d53-a4ec-710cba95fac6/volumes/kubernetes.io~projected/kube-api-access-6tj8l major:0 minor:380 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3d69f101-60a8-41fd-bcda-4eb654c626a2/volumes/kubernetes.io~projected/kube-api-access-8gnng:{mountpoint:/var/lib/kubelet/pods/3d69f101-60a8-41fd-bcda-4eb654c626a2/volumes/kubernetes.io~projected/kube-api-access-8gnng major:0 minor:221 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/42b9f2d1-da5c-46b5-b131-d206fa37d436/volumes/kubernetes.io~projected/kube-api-access-bkckt:{mountpoint:/var/lib/kubelet/pods/42b9f2d1-da5c-46b5-b131-d206fa37d436/volumes/kubernetes.io~projected/kube-api-access-bkckt major:0 minor:881 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/42b9f2d1-da5c-46b5-b131-d206fa37d436/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/42b9f2d1-da5c-46b5-b131-d206fa37d436/volumes/kubernetes.io~secret/proxy-tls major:0 minor:880 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/45212ce7-5f95-402e-93c4-83bac844f77d/volumes/kubernetes.io~projected/kube-api-access-knc57:{mountpoint:/var/lib/kubelet/pods/45212ce7-5f95-402e-93c4-83bac844f77d/volumes/kubernetes.io~projected/kube-api-access-knc57 major:0 minor:787 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/45212ce7-5f95-402e-93c4-83bac844f77d/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/45212ce7-5f95-402e-93c4-83bac844f77d/volumes/kubernetes.io~secret/cert major:0 minor:782 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/45212ce7-5f95-402e-93c4-83bac844f77d/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/45212ce7-5f95-402e-93c4-83bac844f77d/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:344 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4711e21f-da6d-47ee-8722-64663e05de10/volumes/kubernetes.io~projected/kube-api-access-ms6s7:{mountpoint:/var/lib/kubelet/pods/4711e21f-da6d-47ee-8722-64663e05de10/volumes/kubernetes.io~projected/kube-api-access-ms6s7 major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4711e21f-da6d-47ee-8722-64663e05de10/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/4711e21f-da6d-47ee-8722-64663e05de10/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4fd323ae-11bf-4207-bdce-4d51a9c19dc3/volumes/kubernetes.io~projected/kube-api-access-2ct9j:{mountpoint:/var/lib/kubelet/pods/4fd323ae-11bf-4207-bdce-4d51a9c19dc3/volumes/kubernetes.io~projected/kube-api-access-2ct9j major:0 minor:148 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4fd323ae-11bf-4207-bdce-4d51a9c19dc3/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/4fd323ae-11bf-4207-bdce-4d51a9c19dc3/volumes/kubernetes.io~secret/webhook-cert major:0 minor:140 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a058138-8039-4841-821b-7ee5bb8648e4/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/5a058138-8039-4841-821b-7ee5bb8648e4/volumes/kubernetes.io~projected/kube-api-access major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a058138-8039-4841-821b-7ee5bb8648e4/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/5a058138-8039-4841-821b-7ee5bb8648e4/volumes/kubernetes.io~secret/serving-cert major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a92a557-d023-4531-b3a3-e559af0fe358/volumes/kubernetes.io~projected/kube-api-access-vgvcz:{mountpoint:/var/lib/kubelet/pods/5a92a557-d023-4531-b3a3-e559af0fe358/volumes/kubernetes.io~projected/kube-api-access-vgvcz major:0 minor:238 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a92a557-d023-4531-b3a3-e559af0fe358/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/5a92a557-d023-4531-b3a3-e559af0fe358/volumes/kubernetes.io~secret/srv-cert major:0 minor:609 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5d29f16f-e26f-4b9d-a646-230316e936a8/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/5d29f16f-e26f-4b9d-a646-230316e936a8/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:446 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/5d29f16f-e26f-4b9d-a646-230316e936a8/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/5d29f16f-e26f-4b9d-a646-230316e936a8/volumes/kubernetes.io~empty-dir/tmp major:0 minor:441 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5d29f16f-e26f-4b9d-a646-230316e936a8/volumes/kubernetes.io~projected/kube-api-access-7p4tj:{mountpoint:/var/lib/kubelet/pods/5d29f16f-e26f-4b9d-a646-230316e936a8/volumes/kubernetes.io~projected/kube-api-access-7p4tj major:0 minor:447 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6176b631-3911-41cd-beb6-5bc2e924c3a7/volumes/kubernetes.io~projected/kube-api-access-snwdh:{mountpoint:/var/lib/kubelet/pods/6176b631-3911-41cd-beb6-5bc2e924c3a7/volumes/kubernetes.io~projected/kube-api-access-snwdh major:0 minor:904 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6176b631-3911-41cd-beb6-5bc2e924c3a7/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/6176b631-3911-41cd-beb6-5bc2e924c3a7/volumes/kubernetes.io~secret/cert major:0 minor:985 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/631b3a8e-43e0-4818-b6e1-bd61ac531ab6/volumes/kubernetes.io~projected/kube-api-access-6q425:{mountpoint:/var/lib/kubelet/pods/631b3a8e-43e0-4818-b6e1-bd61ac531ab6/volumes/kubernetes.io~projected/kube-api-access-6q425 major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/631b3a8e-43e0-4818-b6e1-bd61ac531ab6/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/631b3a8e-43e0-4818-b6e1-bd61ac531ab6/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b/volumes/kubernetes.io~projected/ca-certs major:0 minor:558 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b/volumes/kubernetes.io~projected/kube-api-access-c72dm:{mountpoint:/var/lib/kubelet/pods/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b/volumes/kubernetes.io~projected/kube-api-access-c72dm major:0 minor:563 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:557 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7a1b7b0d-6e00-485e-86e8-7bd047569328/volumes/kubernetes.io~projected/kube-api-access-fkp89:{mountpoint:/var/lib/kubelet/pods/7a1b7b0d-6e00-485e-86e8-7bd047569328/volumes/kubernetes.io~projected/kube-api-access-fkp89 major:0 minor:735 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7a1b7b0d-6e00-485e-86e8-7bd047569328/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/7a1b7b0d-6e00-485e-86e8-7bd047569328/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:703 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7a1b7b0d-6e00-485e-86e8-7bd047569328/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/7a1b7b0d-6e00-485e-86e8-7bd047569328/volumes/kubernetes.io~secret/webhook-cert major:0 minor:704 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7af634f0-65ac-402a-acd6-a8aad11b37ab/volumes/kubernetes.io~projected/kube-api-access-sm9tk:{mountpoint:/var/lib/kubelet/pods/7af634f0-65ac-402a-acd6-a8aad11b37ab/volumes/kubernetes.io~projected/kube-api-access-sm9tk major:0 minor:386 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7af634f0-65ac-402a-acd6-a8aad11b37ab/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/7af634f0-65ac-402a-acd6-a8aad11b37ab/volumes/kubernetes.io~secret/signing-key major:0 minor:385 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6/volumes/kubernetes.io~projected/kube-api-access-bdzj9:{mountpoint:/var/lib/kubelet/pods/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6/volumes/kubernetes.io~projected/kube-api-access-bdzj9 major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:610 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7fafb070-7914-41c2-a8b2-e609a0e5bf9f/volumes/kubernetes.io~projected/kube-api-access-4rtt8:{mountpoint:/var/lib/kubelet/pods/7fafb070-7914-41c2-a8b2-e609a0e5bf9f/volumes/kubernetes.io~projected/kube-api-access-4rtt8 major:0 minor:865 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7fafb070-7914-41c2-a8b2-e609a0e5bf9f/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/7fafb070-7914-41c2-a8b2-e609a0e5bf9f/volumes/kubernetes.io~secret/proxy-tls major:0 minor:864 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/81abc17a-8a51-44e2-a5df-5ddb394a9fa6/volumes/kubernetes.io~projected/kube-api-access-cxhht:{mountpoint:/var/lib/kubelet/pods/81abc17a-8a51-44e2-a5df-5ddb394a9fa6/volumes/kubernetes.io~projected/kube-api-access-cxhht major:0 minor:807 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/81abc17a-8a51-44e2-a5df-5ddb394a9fa6/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/81abc17a-8a51-44e2-a5df-5ddb394a9fa6/volumes/kubernetes.io~secret/proxy-tls major:0 minor:806 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/82ee54a2-5967-4da7-940c-5200d7df098d/volumes/kubernetes.io~projected/kube-api-access-ttwx8:{mountpoint:/var/lib/kubelet/pods/82ee54a2-5967-4da7-940c-5200d7df098d/volumes/kubernetes.io~projected/kube-api-access-ttwx8 major:0 minor:520 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8985dac1-38cf-41d1-b7cd-c2bfaf0f6ebc/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/8985dac1-38cf-41d1-b7cd-c2bfaf0f6ebc/volumes/kubernetes.io~secret/tls-certificates major:0 minor:899 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/89e15db4-c541-4d53-878d-706fa022f970/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/89e15db4-c541-4d53-878d-706fa022f970/volumes/kubernetes.io~projected/kube-api-access major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/89e15db4-c541-4d53-878d-706fa022f970/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/89e15db4-c541-4d53-878d-706fa022f970/volumes/kubernetes.io~secret/serving-cert major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/89fc77c9-b444-4828-8a35-c63ea9335245/volumes/kubernetes.io~projected/kube-api-access-6xrfv:{mountpoint:/var/lib/kubelet/pods/89fc77c9-b444-4828-8a35-c63ea9335245/volumes/kubernetes.io~projected/kube-api-access-6xrfv major:0 minor:91 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/89fc77c9-b444-4828-8a35-c63ea9335245/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/89fc77c9-b444-4828-8a35-c63ea9335245/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c65557b-9566-49f1-a049-fe492ca201b5/volumes/kubernetes.io~projected/kube-api-access-5fw25:{mountpoint:/var/lib/kubelet/pods/8c65557b-9566-49f1-a049-fe492ca201b5/volumes/kubernetes.io~projected/kube-api-access-5fw25 major:0 minor:841 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c65557b-9566-49f1-a049-fe492ca201b5/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/8c65557b-9566-49f1-a049-fe492ca201b5/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:1120 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/90ef7c0a-7c6f-45aa-865d-1e247110b265/volumes/kubernetes.io~projected/kube-api-access-ttqvt:{mountpoint:/var/lib/kubelet/pods/90ef7c0a-7c6f-45aa-865d-1e247110b265/volumes/kubernetes.io~projected/kube-api-access-ttqvt major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/90ef7c0a-7c6f-45aa-865d-1e247110b265/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/90ef7c0a-7c6f-45aa-865d-1e247110b265/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/965f8eef-c5af-499b-b1db-cf63072781cc/volumes/kubernetes.io~projected/kube-api-access-mjzs5:{mountpoint:/var/lib/kubelet/pods/965f8eef-c5af-499b-b1db-cf63072781cc/volumes/kubernetes.io~projected/kube-api-access-mjzs5 major:0 minor:799 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/965f8eef-c5af-499b-b1db-cf63072781cc/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/965f8eef-c5af-499b-b1db-cf63072781cc/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:798 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/99923acc-a1b4-4fbc-a636-f9c145856b01/volumes/kubernetes.io~projected/kube-api-access-tfdpq:{mountpoint:/var/lib/kubelet/pods/99923acc-a1b4-4fbc-a636-f9c145856b01/volumes/kubernetes.io~projected/kube-api-access-tfdpq major:0 minor:938 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/99923acc-a1b4-4fbc-a636-f9c145856b01/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/99923acc-a1b4-4fbc-a636-f9c145856b01/volumes/kubernetes.io~secret/certs major:0 minor:929 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/99923acc-a1b4-4fbc-a636-f9c145856b01/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/99923acc-a1b4-4fbc-a636-f9c145856b01/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:930 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/9b090750-b893-42fe-8def-dfb3f4253d43/volumes/kubernetes.io~projected/kube-api-access-p8l6s:{mountpoint:/var/lib/kubelet/pods/9b090750-b893-42fe-8def-dfb3f4253d43/volumes/kubernetes.io~projected/kube-api-access-p8l6s major:0 minor:523 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9b090750-b893-42fe-8def-dfb3f4253d43/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/9b090750-b893-42fe-8def-dfb3f4253d43/volumes/kubernetes.io~secret/metrics-tls major:0 minor:375 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d40fba7-84f0-46d7-9b49-dbba7aab20c5/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/9d40fba7-84f0-46d7-9b49-dbba7aab20c5/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d40fba7-84f0-46d7-9b49-dbba7aab20c5/volumes/kubernetes.io~projected/kube-api-access-hl7m5:{mountpoint:/var/lib/kubelet/pods/9d40fba7-84f0-46d7-9b49-dbba7aab20c5/volumes/kubernetes.io~projected/kube-api-access-hl7m5 major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d40fba7-84f0-46d7-9b49-dbba7aab20c5/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/9d40fba7-84f0-46d7-9b49-dbba7aab20c5/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9fb588a9-6240-4513-8e4b-248eb43d3f06/volumes/kubernetes.io~projected/kube-api-access-5d8xq:{mountpoint:/var/lib/kubelet/pods/9fb588a9-6240-4513-8e4b-248eb43d3f06/volumes/kubernetes.io~projected/kube-api-access-5d8xq major:0 minor:370 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a0ee8c53-bf36-4459-a2c2-380293a09e26/volumes/kubernetes.io~projected/kube-api-access-c8krg:{mountpoint:/var/lib/kubelet/pods/a0ee8c53-bf36-4459-a2c2-380293a09e26/volumes/kubernetes.io~projected/kube-api-access-c8krg major:0 minor:1146 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a0ee8c53-bf36-4459-a2c2-380293a09e26/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a0ee8c53-bf36-4459-a2c2-380293a09e26/volumes/kubernetes.io~secret/serving-cert major:0 minor:1141 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a55bef81-2381-4036-b171-3dbc77e9c25d/volumes/kubernetes.io~projected/kube-api-access-hj7h8:{mountpoint:/var/lib/kubelet/pods/a55bef81-2381-4036-b171-3dbc77e9c25d/volumes/kubernetes.io~projected/kube-api-access-hj7h8 major:0 minor:98 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aadf7b67-db33-4392-81f5-1b93eef54545/volumes/kubernetes.io~projected/kube-api-access-n4vq9:{mountpoint:/var/lib/kubelet/pods/aadf7b67-db33-4392-81f5-1b93eef54545/volumes/kubernetes.io~projected/kube-api-access-n4vq9 major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ae8f3a1e-689b-4107-993a-dde67f4decf2/volumes/kubernetes.io~projected/kube-api-access-ctdbq:{mountpoint:/var/lib/kubelet/pods/ae8f3a1e-689b-4107-993a-dde67f4decf2/volumes/kubernetes.io~projected/kube-api-access-ctdbq major:0 minor:949 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ae8f3a1e-689b-4107-993a-dde67f4decf2/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/ae8f3a1e-689b-4107-993a-dde67f4decf2/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:945 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ae8f3a1e-689b-4107-993a-dde67f4decf2/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/ae8f3a1e-689b-4107-993a-dde67f4decf2/volumes/kubernetes.io~secret/prometheus-operator-tls major:0 minor:830 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b537a655-ef73-40b5-b228-95ab6cfdedf2/volumes/kubernetes.io~projected/kube-api-access-d4t2j:{mountpoint:/var/lib/kubelet/pods/b537a655-ef73-40b5-b228-95ab6cfdedf2/volumes/kubernetes.io~projected/kube-api-access-d4t2j major:0 minor:113 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b537a655-ef73-40b5-b228-95ab6cfdedf2/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/b537a655-ef73-40b5-b228-95ab6cfdedf2/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:950 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bd1bcaff-7dbd-4559-92fc-5453993f643e/volumes/kubernetes.io~projected/kube-api-access-wplgs:{mountpoint:/var/lib/kubelet/pods/bd1bcaff-7dbd-4559-92fc-5453993f643e/volumes/kubernetes.io~projected/kube-api-access-wplgs major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bd1bcaff-7dbd-4559-92fc-5453993f643e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/bd1bcaff-7dbd-4559-92fc-5453993f643e/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bd53c98b-51cc-498a-ab37-f743a27bdcfb/volumes/kubernetes.io~projected/kube-api-access-hz7l8:{mountpoint:/var/lib/kubelet/pods/bd53c98b-51cc-498a-ab37-f743a27bdcfb/volumes/kubernetes.io~projected/kube-api-access-hz7l8 major:0 minor:757 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bd53c98b-51cc-498a-ab37-f743a27bdcfb/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/bd53c98b-51cc-498a-ab37-f743a27bdcfb/volumes/kubernetes.io~secret/serving-cert major:0 minor:751 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/beed862c-6283-4568-aa2e-f49b31e30a3b/volumes/kubernetes.io~projected/kube-api-access-22zrr:{mountpoint:/var/lib/kubelet/pods/beed862c-6283-4568-aa2e-f49b31e30a3b/volumes/kubernetes.io~projected/kube-api-access-22zrr major:0 minor:1006 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/beed862c-6283-4568-aa2e-f49b31e30a3b/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/beed862c-6283-4568-aa2e-f49b31e30a3b/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:1003 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/beed862c-6283-4568-aa2e-f49b31e30a3b/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/beed862c-6283-4568-aa2e-f49b31e30a3b/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:1018 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601/volumes/kubernetes.io~projected/kube-api-access-nzgg5:{mountpoint:/var/lib/kubelet/pods/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601/volumes/kubernetes.io~projected/kube-api-access-nzgg5 major:0 minor:1005 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config major:0 minor:1004 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:1013 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c474b370-c291-4662-b57c-a20f77931c1b/volumes/kubernetes.io~projected/kube-api-access-xhc2q:{mountpoint:/var/lib/kubelet/pods/c474b370-c291-4662-b57c-a20f77931c1b/volumes/kubernetes.io~projected/kube-api-access-xhc2q major:0 minor:906 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/volumes/kubernetes.io~projected/kube-api-access-89prb:{mountpoint:/var/lib/kubelet/pods/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/volumes/kubernetes.io~projected/kube-api-access-89prb major:0 minor:263 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/volumes/kubernetes.io~secret/etcd-client major:0 minor:227 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/volumes/kubernetes.io~secret/serving-cert major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6/volumes/kubernetes.io~projected/kube-api-access-2mbg2:{mountpoint:/var/lib/kubelet/pods/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6/volumes/kubernetes.io~projected/kube-api-access-2mbg2 major:0 minor:112 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:829 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d2a53f3b-7e22-47eb-9f28-da3441b3662f/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/d2a53f3b-7e22-47eb-9f28-da3441b3662f/volumes/kubernetes.io~projected/kube-api-access major:0 minor:728 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d2a53f3b-7e22-47eb-9f28-da3441b3662f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d2a53f3b-7e22-47eb-9f28-da3441b3662f/volumes/kubernetes.io~secret/serving-cert major:0 minor:723 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d5eee869-c27f-4534-bbce-d954c42b36a3/volumes/kubernetes.io~projected/kube-api-access-l2tk7:{mountpoint:/var/lib/kubelet/pods/d5eee869-c27f-4534-bbce-d954c42b36a3/volumes/kubernetes.io~projected/kube-api-access-l2tk7 major:0 minor:118 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d68278f6-59d5-4bbf-b969-e47635ffd4cc/volumes/kubernetes.io~projected/kube-api-access-sstv2:{mountpoint:/var/lib/kubelet/pods/d68278f6-59d5-4bbf-b969-e47635ffd4cc/volumes/kubernetes.io~projected/kube-api-access-sstv2 major:0 minor:269 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d68278f6-59d5-4bbf-b969-e47635ffd4cc/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/d68278f6-59d5-4bbf-b969-e47635ffd4cc/volumes/kubernetes.io~secret/srv-cert major:0 minor:615 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d82cf0db-0891-482d-856b-1675843042dd/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/d82cf0db-0891-482d-856b-1675843042dd/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d82cf0db-0891-482d-856b-1675843042dd/volumes/kubernetes.io~projected/kube-api-access-g4kt5:{mountpoint:/var/lib/kubelet/pods/d82cf0db-0891-482d-856b-1675843042dd/volumes/kubernetes.io~projected/kube-api-access-g4kt5 major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d82cf0db-0891-482d-856b-1675843042dd/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/d82cf0db-0891-482d-856b-1675843042dd/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:449 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/daf9e0ac-b5a3-4a3e-aa57-31b810f634ef/volumes/kubernetes.io~projected/kube-api-access-t29sr:{mountpoint:/var/lib/kubelet/pods/daf9e0ac-b5a3-4a3e-aa57-31b810f634ef/volumes/kubernetes.io~projected/kube-api-access-t29sr major:0 minor:1175 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/daf9e0ac-b5a3-4a3e-aa57-31b810f634ef/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/daf9e0ac-b5a3-4a3e-aa57-31b810f634ef/volumes/kubernetes.io~secret/webhook-certs major:0 minor:1170 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e6716923-7f46-438f-9cc4-c0f071ca5b1a/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/e6716923-7f46-438f-9cc4-c0f071ca5b1a/volumes/kubernetes.io~projected/kube-api-access major:0 minor:399 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff/volumes/kubernetes.io~projected/kube-api-access-qqrn6:{mountpoint:/var/lib/kubelet/pods/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff/volumes/kubernetes.io~projected/kube-api-access-qqrn6 major:0 minor:863 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:852 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d/volumes/kubernetes.io~projected/kube-api-access-kxcml:{mountpoint:/var/lib/kubelet/pods/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d/volumes/kubernetes.io~projected/kube-api-access-kxcml major:0 minor:905 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d/volumes/kubernetes.io~secret/default-certificate major:0 minor:903 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d/volumes/kubernetes.io~secret/metrics-certs major:0 minor:897 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d/volumes/kubernetes.io~secret/stats-auth major:0 minor:898 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ea474cd1-8693-4505-9d6f-863d78776d11/volumes/kubernetes.io~projected/kube-api-access-2r6wb:{mountpoint:/var/lib/kubelet/pods/ea474cd1-8693-4505-9d6f-863d78776d11/volumes/kubernetes.io~projected/kube-api-access-2r6wb major:0 minor:74 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ed56c17f-7e15-4776-80a6-3ef091307e89/volumes/kubernetes.io~projected/kube-api-access-4kxn4:{mountpoint:/var/lib/kubelet/pods/ed56c17f-7e15-4776-80a6-3ef091307e89/volumes/kubernetes.io~projected/kube-api-access-4kxn4 major:0 minor:262 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ed56c17f-7e15-4776-80a6-3ef091307e89/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/ed56c17f-7e15-4776-80a6-3ef091307e89/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:611 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ef16d7ae-66aa-45d4-b1a6-1327738a46bb/volumes/kubernetes.io~projected/kube-api-access-mgfrv:{mountpoint:/var/lib/kubelet/pods/ef16d7ae-66aa-45d4-b1a6-1327738a46bb/volumes/kubernetes.io~projected/kube-api-access-mgfrv major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ef16d7ae-66aa-45d4-b1a6-1327738a46bb/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/ef16d7ae-66aa-45d4-b1a6-1327738a46bb/volumes/kubernetes.io~secret/metrics-tls major:0 minor:451 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/efd90b06-2733-4086-8d70-b9aed3f7c5fa/volumes/kubernetes.io~projected/kube-api-access-w5qkq:{mountpoint:/var/lib/kubelet/pods/efd90b06-2733-4086-8d70-b9aed3f7c5fa/volumes/kubernetes.io~projected/kube-api-access-w5qkq major:0 minor:81 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2057f75-159d-4416-a234-050f0fe1afc9/volumes/kubernetes.io~projected/kube-api-access-c9vkx:{mountpoint:/var/lib/kubelet/pods/f2057f75-159d-4416-a234-050f0fe1afc9/volumes/kubernetes.io~projected/kube-api-access-c9vkx major:0 minor:476 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2057f75-159d-4416-a234-050f0fe1afc9/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/f2057f75-159d-4416-a234-050f0fe1afc9/volumes/kubernetes.io~secret/encryption-config major:0 minor:438 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f2057f75-159d-4416-a234-050f0fe1afc9/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/f2057f75-159d-4416-a234-050f0fe1afc9/volumes/kubernetes.io~secret/etcd-client major:0 minor:465 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2057f75-159d-4416-a234-050f0fe1afc9/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f2057f75-159d-4416-a234-050f0fe1afc9/volumes/kubernetes.io~secret/serving-cert major:0 minor:478 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f520fbf8-9403-46bc-9381-226a3a1ed1c7/volumes/kubernetes.io~projected/kube-api-access-hrq96:{mountpoint:/var/lib/kubelet/pods/f520fbf8-9403-46bc-9381-226a3a1ed1c7/volumes/kubernetes.io~projected/kube-api-access-hrq96 major:0 minor:424 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f6ee6202-11e5-4586-ae46-075da1ad7f1a/volumes/kubernetes.io~projected/kube-api-access-njrcj:{mountpoint:/var/lib/kubelet/pods/f6ee6202-11e5-4586-ae46-075da1ad7f1a/volumes/kubernetes.io~projected/kube-api-access-njrcj major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f6ee6202-11e5-4586-ae46-075da1ad7f1a/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/f6ee6202-11e5-4586-ae46-075da1ad7f1a/volumes/kubernetes.io~secret/metrics-certs major:0 minor:614 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6/volumes/kubernetes.io~projected/kube-api-access-7q68p:{mountpoint:/var/lib/kubelet/pods/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6/volumes/kubernetes.io~projected/kube-api-access-7q68p major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:612 fsType:tmpfs blockSize:0} 
overlay_0-1009:{mountpoint:/var/lib/containers/storage/overlay/9edea836748d4f7c518aa174792d5b16e7be1435fdd273350a1b6105caa31d5a/merged major:0 minor:1009 fsType:overlay blockSize:0} overlay_0-1011:{mountpoint:/var/lib/containers/storage/overlay/0d60cfc32bdd8dbe20c4be47e7ad28bd1901125159a661ed8e8d70f32c7687c6/merged major:0 minor:1011 fsType:overlay blockSize:0} overlay_0-1019:{mountpoint:/var/lib/containers/storage/overlay/bca916075368cde4cf18688780917a5be19a60a0f8e8f98ad0fbd3aa9e09c644/merged major:0 minor:1019 fsType:overlay blockSize:0} overlay_0-1021:{mountpoint:/var/lib/containers/storage/overlay/c9ef05077179051d79db68ac9db5df8c7ce5aff4ab460374a9d70fcb78ac1c60/merged major:0 minor:1021 fsType:overlay blockSize:0} overlay_0-1023:{mountpoint:/var/lib/containers/storage/overlay/f178be201449ea7e6a0d52a9a867324323b12344d0215a67554a68cc943282a6/merged major:0 minor:1023 fsType:overlay blockSize:0} overlay_0-1031:{mountpoint:/var/lib/containers/storage/overlay/c67cbf5624b261bd6a2aef7b6505e72c96e1f9db54cf24682aa2949abf2892b9/merged major:0 minor:1031 fsType:overlay blockSize:0} overlay_0-1041:{mountpoint:/var/lib/containers/storage/overlay/19b0b02a80fae6c040808d4692648b6a523a14eae2f46fa918302cc1c97eba7f/merged major:0 minor:1041 fsType:overlay blockSize:0} overlay_0-1043:{mountpoint:/var/lib/containers/storage/overlay/baf93efe6828c420e8413a029ee5726bf6efb7dfe45679f5f5bc56cb820c51cc/merged major:0 minor:1043 fsType:overlay blockSize:0} overlay_0-1045:{mountpoint:/var/lib/containers/storage/overlay/568acf441b3d058346f003c88bc0264846f68514a9adf0d436ffc9d9151877e9/merged major:0 minor:1045 fsType:overlay blockSize:0} overlay_0-1047:{mountpoint:/var/lib/containers/storage/overlay/3036baa1beed8e577f94bc2f0eb92ee9402319a268f1d5e900188b059ba6776f/merged major:0 minor:1047 fsType:overlay blockSize:0} overlay_0-1052:{mountpoint:/var/lib/containers/storage/overlay/dfe6743ef0a904ec69e87bb1edee9b68f0e74325942d3e37971743b8beeebb22/merged major:0 minor:1052 fsType:overlay 
blockSize:0} overlay_0-106:{mountpoint:/var/lib/containers/storage/overlay/ed4ceb0bf7ee197bbe517f84763840276d5d3458c0de9236cb4c125c0aa08877/merged major:0 minor:106 fsType:overlay blockSize:0} overlay_0-1062:{mountpoint:/var/lib/containers/storage/overlay/4807f71f968cbad195053d1eb7a534e69ccd5544a167323f72b29ebe52fe94ff/merged major:0 minor:1062 fsType:overlay blockSize:0} overlay_0-1068:{mountpoint:/var/lib/containers/storage/overlay/8f3749fdbdea3f72029f3f7208680e240fe43f5fded2459f39875d1ead0efc77/merged major:0 minor:1068 fsType:overlay blockSize:0} overlay_0-1074:{mountpoint:/var/lib/containers/storage/overlay/69b428e9a372999e4db5a4cb6c75e7b35bea81c00e743bc3f0a70bc3694bf99a/merged major:0 minor:1074 fsType:overlay blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/be7fa603d636332edd28689f5619b59c0ce29d653c23713c662a0284cc1b3672/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-1082:{mountpoint:/var/lib/containers/storage/overlay/0c93bff9d754feac7c57776004eeb2722564dcbf719db9e18b3e6ae3b178380c/merged major:0 minor:1082 fsType:overlay blockSize:0} overlay_0-1084:{mountpoint:/var/lib/containers/storage/overlay/e9959a62e4cce6c1f7b6a256d807195744e7e6d75547112f66cdbfa6985512b7/merged major:0 minor:1084 fsType:overlay blockSize:0} overlay_0-1094:{mountpoint:/var/lib/containers/storage/overlay/562830e6bbb089f9841beb9d8c4bf59bc0098c57629b34127990b81aa0ed311d/merged major:0 minor:1094 fsType:overlay blockSize:0} overlay_0-1096:{mountpoint:/var/lib/containers/storage/overlay/34c4f2a2f2c856c48fd3f78b56ee510b33375a91cde663b31b451ef5f00bb4a4/merged major:0 minor:1096 fsType:overlay blockSize:0} overlay_0-110:{mountpoint:/var/lib/containers/storage/overlay/64e741437a938b8dde0692e97e97d5be86f1c586d4fb4ee6a89bc7c34fa8efcc/merged major:0 minor:110 fsType:overlay blockSize:0} overlay_0-1106:{mountpoint:/var/lib/containers/storage/overlay/e293280fa7bb7367328ea76986cd098c11720e5d89cf61232cfe8efd1e4f2e1b/merged major:0 minor:1106 fsType:overlay 
blockSize:0} overlay_0-1108:{mountpoint:/var/lib/containers/storage/overlay/28fce1f0187a2b8e11bc5a2c5dd2fb7f98f5e927b7afee5d10dae1b48cd71aa5/merged major:0 minor:1108 fsType:overlay blockSize:0} overlay_0-1115:{mountpoint:/var/lib/containers/storage/overlay/73887f4e4c3ee40eeae1e080786701b3b29ca6e3e89fc125d2d5a19553243456/merged major:0 minor:1115 fsType:overlay blockSize:0} overlay_0-1149:{mountpoint:/var/lib/containers/storage/overlay/9bbc80133a0bda8ee3d85cc29b865cf8ae368633e55642da3a94daa102bdb5a9/merged major:0 minor:1149 fsType:overlay blockSize:0} overlay_0-1151:{mountpoint:/var/lib/containers/storage/overlay/cb62f0b961c95539b902af1c2c1c4de169e9c8e9d6d83b4d7a52d5adc7e32ab2/merged major:0 minor:1151 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/9e8a37843b53028b3e2c52c0d6d61b1f1ae808dafdc0a835241f3f9ffd231fb9/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-1177:{mountpoint:/var/lib/containers/storage/overlay/777ee1e8d2cf51b2aa53e21a138334a96f8d92afb3514aa0e36b7a4c8c88eec3/merged major:0 minor:1177 fsType:overlay blockSize:0} overlay_0-1178:{mountpoint:/var/lib/containers/storage/overlay/fc20fc6a4f2cdb0b51418c146d589b91a8455598b45c8a6b633a2999c2b48837/merged major:0 minor:1178 fsType:overlay blockSize:0} overlay_0-1181:{mountpoint:/var/lib/containers/storage/overlay/7aaddd158b7fbe51940711041622a42647a24a6a502e138210e4fe27d55f33e1/merged major:0 minor:1181 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/09592d1e24a6d95d2603666f65f7ea884b31db65ebe836ad8dc6a9e3cdbe985a/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/193af17e293f991c31e24667bf74a7f95ee71b9ae4526e9b23cbf46e51da0a7a/merged major:0 minor:131 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/6ce88a6a4cf530f52e40d3c5b1c408b3703aec6836fc4d095b37b68d5f41dfda/merged major:0 minor:134 fsType:overlay 
blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/a1df295eba3f2844ec53a3966350b20c7526a33c983ba41bcc1e800a75a41fd9/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-138:{mountpoint:/var/lib/containers/storage/overlay/c723ebc7f449989671d4fb7c855ce202ff344510a1599542347e5d08f891f77c/merged major:0 minor:138 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/25002767fb608f673b61c99a596c81c0d0e7c1e443a841b95932ba4a854e4754/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/79d51a26cc268951514171ee03ef11b8e4e2d08b73bdcfe9ffb9c0507a5042ef/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/3129fa3917071cd5799505c6a9fe408e333cde5e680a620bb2331594a404cb7d/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-158:{mountpoint:/var/lib/containers/storage/overlay/80f4d29cee450cddd3a0ce6d6b046d7ad1348f00326842781307bd20b7485aad/merged major:0 minor:158 fsType:overlay blockSize:0} overlay_0-160:{mountpoint:/var/lib/containers/storage/overlay/e23bb6ac73a13f7773eb8112d2e8e6b2861a27ab1089adedb1b145bd25ad49fa/merged major:0 minor:160 fsType:overlay blockSize:0} overlay_0-164:{mountpoint:/var/lib/containers/storage/overlay/8216aad78018d6f6f2a4db020f36e48c0ea717a873bb018ede1652175e271b12/merged major:0 minor:164 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/6bda461609f3cfca07dc7907433f66f40104884351f5b5a09041c38c9beff9ce/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/cc95ac09c8597c96850b9012ee0b895964857d58aa8318737d1dbca06d63fe71/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/339e31e71bda0061d0e64ca3ed96354c248b898e77ee1b278ce9d0c2ad4f05d8/merged major:0 minor:179 fsType:overlay blockSize:0} 
overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/9ccb602c6fe0d50e59b6a08ac881b19df29109eb0779de0695638de3eaae3daa/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/3e8ac4325be8c86eb950c2b31367c33c3228f589ab4d3a7a066bb6c0b0502eb0/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/4cb9afad51fcb61d47a6d46f4d5a818051fb386063ddb84f2a39e3a3e934144a/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/8eaf8d12e9599ecaf1bd96a658238d730a2974926b2ad73c980633a98d3765e6/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/03be4afa51585932e1ab53893eb10145b02dafe8c3cc898b5ec9e4846681edea/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-223:{mountpoint:/var/lib/containers/storage/overlay/e050a7a014ef4ed147535ab3f61a1dfbeeb3bed37b63d56f6b2cee3db35d4522/merged major:0 minor:223 fsType:overlay blockSize:0} overlay_0-258:{mountpoint:/var/lib/containers/storage/overlay/64b9936ba91ca2bd4a32da5b28044bb5cbf5688ae42c63e8f5c908265dacf1a2/merged major:0 minor:258 fsType:overlay blockSize:0} overlay_0-265:{mountpoint:/var/lib/containers/storage/overlay/60973adef19c3dcb04fe88167ae33b3ba90d46fa31906ceac1a125aca96c749e/merged major:0 minor:265 fsType:overlay blockSize:0} overlay_0-267:{mountpoint:/var/lib/containers/storage/overlay/c6a49a0fa7068016c30f8433830253f65661642f1757bcddd471f717dfe6a9c3/merged major:0 minor:267 fsType:overlay blockSize:0} overlay_0-270:{mountpoint:/var/lib/containers/storage/overlay/f43358c3ea802de8961fd751683d6c0c7c1d845206e1717783deec507896b189/merged major:0 minor:270 fsType:overlay blockSize:0} overlay_0-275:{mountpoint:/var/lib/containers/storage/overlay/2b21278f9f61a867b9792d0d016094c580d0a3c87c36764d44b80409afe9d23c/merged major:0 minor:275 fsType:overlay blockSize:0} 
overlay_0-283:{mountpoint:/var/lib/containers/storage/overlay/038410bec10be52e15eac956c33b38c568e629809aef0d7cb3f31e5b0c31cee3/merged major:0 minor:283 fsType:overlay blockSize:0} overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/648396e8555649c7ad6f25332b118ada5744d6d1ec2059288eb6a5f3e387b50b/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/74b5c0311ece6873cd203ed01e069264c4535e89b881063b6d78f2ada3daead4/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/27c7d091d7e55a9e1329c33f7e0bf0d7e26248519a1242ab90d2892999942a7a/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/d2adb0ccd23d27959db73dfe908f90bafd3dc2956d88d39a3c8554a4a7cf48fb/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/ce752a33c7df6cf1c040116eba442ee8d20260696d4bf082a9a65aa1c3c1d649/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-298:{mountpoint:/var/lib/containers/storage/overlay/3d08ee46e095b20dc52fada7d1f8c4d0d0414e15c097d5920cea3b9b7b7042e8/merged major:0 minor:298 fsType:overlay blockSize:0} overlay_0-300:{mountpoint:/var/lib/containers/storage/overlay/c657200ddc3c2da68fe849baef17ddb232fe145085e7e240f06ba26df4ab918d/merged major:0 minor:300 fsType:overlay blockSize:0} overlay_0-302:{mountpoint:/var/lib/containers/storage/overlay/797f2c52ce0acdf42af668e694ea4bf49d4c9fdeb1d3978345cdadd58a30cfc0/merged major:0 minor:302 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/f2349cb35cfb725bf225d40609943907c904e50a8b722e4b4b9d1c2382e601e3/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-308:{mountpoint:/var/lib/containers/storage/overlay/5c01df453102dccdd83f21f1978bbeb85e003d47437c5c0db1edbd9699208432/merged major:0 minor:308 fsType:overlay blockSize:0} 
overlay_0-312:{mountpoint:/var/lib/containers/storage/overlay/9379d4334e3734b75e1d539ee644cfabf7342c51cc5fc60ecfffef91f6c401a1/merged major:0 minor:312 fsType:overlay blockSize:0} overlay_0-314:{mountpoint:/var/lib/containers/storage/overlay/17b3ebd6fa099e4ec7f0f345d11cbab6e5ae568e32f2c300c964c76934de581b/merged major:0 minor:314 fsType:overlay blockSize:0} overlay_0-316:{mountpoint:/var/lib/containers/storage/overlay/36d3b7f1442b896d0e71f9c883cb0711f56cd7cddbbc852e62aaffedc442dc36/merged major:0 minor:316 fsType:overlay blockSize:0} overlay_0-328:{mountpoint:/var/lib/containers/storage/overlay/7fd055588099e8494dc76648aa75dc4b2410316bf38eb2ec9c5067f1a7bf28d8/merged major:0 minor:328 fsType:overlay blockSize:0} overlay_0-330:{mountpoint:/var/lib/containers/storage/overlay/5bbaa419ea75eccfb782590ec68dc68024c59d185e804d001fbf072376301c08/merged major:0 minor:330 fsType:overlay blockSize:0} overlay_0-332:{mountpoint:/var/lib/containers/storage/overlay/c497b40d56fb07f4f254098f47badb50195e89110b13f1aa4bb3c93274f97bb7/merged major:0 minor:332 fsType:overlay blockSize:0} overlay_0-337:{mountpoint:/var/lib/containers/storage/overlay/143a15e6d181603b070164422c1ddaf152b1368606fc610ca18c163b50b8432a/merged major:0 minor:337 fsType:overlay blockSize:0} overlay_0-345:{mountpoint:/var/lib/containers/storage/overlay/c01fc4b13b1c9db5fb49d120b95e1543af4825f8eef986838be3f50b030d2681/merged major:0 minor:345 fsType:overlay blockSize:0} overlay_0-346:{mountpoint:/var/lib/containers/storage/overlay/a87ea8da9ad4df990c6f294b8080c2aff36e1d64b6d6abd2639671818d1f15ab/merged major:0 minor:346 fsType:overlay blockSize:0} overlay_0-348:{mountpoint:/var/lib/containers/storage/overlay/521f59084ba8ef1cb04f9785a925236bafe3afd889c7ab118c5d6b0b82a45d24/merged major:0 minor:348 fsType:overlay blockSize:0} overlay_0-354:{mountpoint:/var/lib/containers/storage/overlay/3946dd7a2affb1224caec63bc711f8941466a66e63672ab44afbf0377f2f1c5e/merged major:0 minor:354 fsType:overlay blockSize:0} 
overlay_0-356:{mountpoint:/var/lib/containers/storage/overlay/714db684751b28c71ca70d5ac5bd546560a788176a9a12dd6bc135882eec7c96/merged major:0 minor:356 fsType:overlay blockSize:0} overlay_0-357:{mountpoint:/var/lib/containers/storage/overlay/8ee7a12147c5f58cb72bde6253557c3d7e595f8c9cd4203047561b0fa3c9d495/merged major:0 minor:357 fsType:overlay blockSize:0} overlay_0-358:{mountpoint:/var/lib/containers/storage/overlay/e3724fde87c978a630beafe4c67448dfbdb1e1e28c3093a0d13886788ba85952/merged major:0 minor:358 fsType:overlay blockSize:0} overlay_0-362:{mountpoint:/var/lib/containers/storage/overlay/4e7faa661e31000fde527715a2766fb4ae0bf8eb831d21282dda64856c3b9e34/merged major:0 minor:362 fsType:overlay blockSize:0} overlay_0-364:{mountpoint:/var/lib/containers/storage/overlay/aeb97fa836a6646a9824eaa30b17de714680703338710a25e765b6573394b49b/merged major:0 minor:364 fsType:overlay blockSize:0} overlay_0-368:{mountpoint:/var/lib/containers/storage/overlay/2824120020f942534e754327cc6b9d276ad775306fa1c84cc404868b64ab1729/merged major:0 minor:368 fsType:overlay blockSize:0} overlay_0-378:{mountpoint:/var/lib/containers/storage/overlay/7e2011e13cc51b3464715bdaf571d88b9fa3374bb3db42c14230005e1c6fe563/merged major:0 minor:378 fsType:overlay blockSize:0} overlay_0-391:{mountpoint:/var/lib/containers/storage/overlay/5939352eff14aa966c5d88611a55db32a1f7e8bf44a4f708ddcef8f6877b425d/merged major:0 minor:391 fsType:overlay blockSize:0} overlay_0-397:{mountpoint:/var/lib/containers/storage/overlay/ca9e2c0778f900be783f5f2ec880b98d631a6d72a218f8d5a6965ae47f8d3c76/merged major:0 minor:397 fsType:overlay blockSize:0} overlay_0-400:{mountpoint:/var/lib/containers/storage/overlay/058808b782b9260ee7dbe2aa68e082057545eb71a76cbea69ce167990847033b/merged major:0 minor:400 fsType:overlay blockSize:0} overlay_0-414:{mountpoint:/var/lib/containers/storage/overlay/8bb3b380461b38e6d39eb9fe9b350751f4cc933e78fe01eb78eaa278d86d1f20/merged major:0 minor:414 fsType:overlay blockSize:0} 
overlay_0-418:{mountpoint:/var/lib/containers/storage/overlay/1833e1f77375614352bede7eee98bc76d6fb9435729ac43e7640c2ba6f7599a6/merged major:0 minor:418 fsType:overlay blockSize:0} overlay_0-428:{mountpoint:/var/lib/containers/storage/overlay/ba8de53ff52f6bbd80239d6690bbbdbf7477dddb6083c210672aeaccf59a893f/merged major:0 minor:428 fsType:overlay blockSize:0} overlay_0-432:{mountpoint:/var/lib/containers/storage/overlay/dca19655978d359bcf9c48e8a05c70965a6598ef7e43612c88c5fbaeec5baf3d/merged major:0 minor:432 fsType:overlay blockSize:0} overlay_0-437:{mountpoint:/var/lib/containers/storage/overlay/f182d434c672be843859053d599a463a869fd17d4cb50f7eda2cdc152e2dfde6/merged major:0 minor:437 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/4be34b5d0dd738ee09477fd6491dd2ee7e2f41a587ca6fbfbd6b1650af6c2d01/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-442:{mountpoint:/var/lib/containers/storage/overlay/e2529c0a7627198a44ff3d1b441ee0e3229055b9b07bb34a616591740e563e41/merged major:0 minor:442 fsType:overlay blockSize:0} overlay_0-445:{mountpoint:/var/lib/containers/storage/overlay/911f5ff41e365f0fa9ec5ce350cece103be9ee2cded502d05151d243b4307097/merged major:0 minor:445 fsType:overlay blockSize:0} overlay_0-452:{mountpoint:/var/lib/containers/storage/overlay/58e864270c5c8969bbf20081b0dd0009d5439ecb14db9041b51c479751cd51d7/merged major:0 minor:452 fsType:overlay blockSize:0} overlay_0-454:{mountpoint:/var/lib/containers/storage/overlay/9f21e6e4b519977850fd57944df92816ccd66f7069e2845f2ea426956eb5b4da/merged major:0 minor:454 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/containers/storage/overlay/0ec80a992291417896f51554e4588534031241ee64285ea11d62ba68f6358f36/merged major:0 minor:46 fsType:overlay blockSize:0} overlay_0-464:{mountpoint:/var/lib/containers/storage/overlay/40f2f0362a91a676904c81815040944aff0fed6bcfb75d2bfe61a7feb85225a4/merged major:0 minor:464 fsType:overlay blockSize:0} 
overlay_0-466:{mountpoint:/var/lib/containers/storage/overlay/d8bb2ab2e137496de6451736de05e39958683d56afea24a670121b360017e704/merged major:0 minor:466 fsType:overlay blockSize:0} overlay_0-472:{mountpoint:/var/lib/containers/storage/overlay/6a51da6c8203b81b5768c8e025077d18fd3ad6701e6a627a37cffdcac01723e8/merged major:0 minor:472 fsType:overlay blockSize:0} overlay_0-474:{mountpoint:/var/lib/containers/storage/overlay/c16a00692c27e03c3c5fafcf909d6ab959605df466de18b687550dd8e086a74c/merged major:0 minor:474 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/a22173d3ac26e649fa288a34711290acf5dd5626da3f20e50c5a41329a3f3774/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-487:{mountpoint:/var/lib/containers/storage/overlay/2dff365090e2f64d522171f83c7bd479010d77e2370bfc53058ab8587b172c0e/merged major:0 minor:487 fsType:overlay blockSize:0} overlay_0-490:{mountpoint:/var/lib/containers/storage/overlay/d62efa5ac95974d16044c2e6764b5c3f39059a3c5d27ff27e8a0b28aaecd11b5/merged major:0 minor:490 fsType:overlay blockSize:0} overlay_0-491:{mountpoint:/var/lib/containers/storage/overlay/2fc43fd24438f0d238c671c98463b765f78eb55fb89a2064a05f4ead1568746f/merged major:0 minor:491 fsType:overlay blockSize:0} overlay_0-502:{mountpoint:/var/lib/containers/storage/overlay/ebf29a33ab906bc6b622b3ef1c298bbe30c00a979cc8d67b20e80c4c17080a51/merged major:0 minor:502 fsType:overlay blockSize:0} overlay_0-504:{mountpoint:/var/lib/containers/storage/overlay/c4f610adb4913bc61a5f1ba58317e1ba4e3196070e173de99f5772ce1185f78e/merged major:0 minor:504 fsType:overlay blockSize:0} overlay_0-505:{mountpoint:/var/lib/containers/storage/overlay/d2e30459a2da1a943f2140034a56ac91f3f9c42c346adb5a16ceab8db1cb48bb/merged major:0 minor:505 fsType:overlay blockSize:0} overlay_0-507:{mountpoint:/var/lib/containers/storage/overlay/bb4ef6386fbec736573462d8898c3807b68ff959b93ccc744e2740b6526c7324/merged major:0 minor:507 fsType:overlay blockSize:0} 
overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/7c0109989687d25086f2e4674a26df17e36a79f9938e755208b11ef3840cefa7/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-524:{mountpoint:/var/lib/containers/storage/overlay/1f31f4edb1f7ad1e70f0d3e4343a3565fdc78141314e0c84751898f33bb288d6/merged major:0 minor:524 fsType:overlay blockSize:0} overlay_0-530:{mountpoint:/var/lib/containers/storage/overlay/151b98c9c547fe6ff77433d240d1f5927914f4966d992de5b49ba85db5110fe9/merged major:0 minor:530 fsType:overlay blockSize:0} overlay_0-533:{mountpoint:/var/lib/containers/storage/overlay/815e569d59d96a306d481eb65f895a534767be28bd6f0e71bc7c0ef0e7c7c0ff/merged major:0 minor:533 fsType:overlay blockSize:0} overlay_0-538:{mountpoint:/var/lib/containers/storage/overlay/0de78689bc02305f81d43bd93518125b331cf7dec14769b8aa701f55c51f6ac8/merged major:0 minor:538 fsType:overlay blockSize:0} overlay_0-540:{mountpoint:/var/lib/containers/storage/overlay/2815b891cbe960e551f1311d604535d0dde3686da48de898c965c0087fc6fc1a/merged major:0 minor:540 fsType:overlay blockSize:0} overlay_0-548:{mountpoint:/var/lib/containers/storage/overlay/a66d52cb14e58956798c97cdfc08006fc2543f03498d27817907387a0e826b4b/merged major:0 minor:548 fsType:overlay blockSize:0} overlay_0-567:{mountpoint:/var/lib/containers/storage/overlay/1f8f9c3644f03e0f6560b8d86fd64d5b4bf120dde597d3238a9ba013d96ad59f/merged major:0 minor:567 fsType:overlay blockSize:0} overlay_0-570:{mountpoint:/var/lib/containers/storage/overlay/dadf3cfcc16ed0a421b77cc0ee75eda481204b220454053fef9700d2c627679e/merged major:0 minor:570 fsType:overlay blockSize:0} overlay_0-572:{mountpoint:/var/lib/containers/storage/overlay/21c3f3c56dd8ed4e083839205d96ed7983920db5add51c6de0ca0d1bb4a91e9f/merged major:0 minor:572 fsType:overlay blockSize:0} overlay_0-574:{mountpoint:/var/lib/containers/storage/overlay/84776eb05d9bb62d2a075be2178544cd1f0e05aa4f901dc5a02dcf33819dbe7e/merged major:0 minor:574 fsType:overlay blockSize:0} 
overlay_0-578:{mountpoint:/var/lib/containers/storage/overlay/49cc8deedf7e616320b3ead6cc1e0e8f49d384a70f7b9a0629cabf1ecd55b5b8/merged major:0 minor:578 fsType:overlay blockSize:0} overlay_0-587:{mountpoint:/var/lib/containers/storage/overlay/9ef4c60ca14576e13dac5644a864868e1a97fc297485a9eb0403354d424b0f21/merged major:0 minor:587 fsType:overlay blockSize:0} overlay_0-589:{mountpoint:/var/lib/containers/storage/overlay/e7f1fff3715e5d3b62b43f35c4f9f23bef87044666c08a2c82a1766dad963683/merged major:0 minor:589 fsType:overlay blockSize:0} overlay_0-591:{mountpoint:/var/lib/containers/storage/overlay/f9c4b6074f4260ea6dc851dc0d9e7a3f35d4e9d04d8df91e0695475dc53d95cb/merged major:0 minor:591 fsType:overlay blockSize:0} overlay_0-599:{mountpoint:/var/lib/containers/storage/overlay/9ce11248f7bab94eff372ea14977b954df6817a48d06601419ae081ccaf02b37/merged major:0 minor:599 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/980cf72e8991e54d032bd4e72f2eb5b01c8469f8b61188e4332fe901337de13b/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-604:{mountpoint:/var/lib/containers/storage/overlay/244cc300bba8503df9658881212bf6014856486599f0bd4f69021d1eeb6533de/merged major:0 minor:604 fsType:overlay blockSize:0} overlay_0-606:{mountpoint:/var/lib/containers/storage/overlay/ab2b190d8b54eeb3937dde1dec0d2a71b894c8aef3fe5fce1f4fd65582c97590/merged major:0 minor:606 fsType:overlay blockSize:0} overlay_0-616:{mountpoint:/var/lib/containers/storage/overlay/fdcc7809150030b5a3262b264dc31b4aab48034de6c7cd7ada9f405180b21f85/merged major:0 minor:616 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/05158cf62256c0d1cbe071b1efe6d7d77690e3fdb5f0266c51543544aad0790c/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-639:{mountpoint:/var/lib/containers/storage/overlay/1972699f8697278e18430039993232bba48def4ae75d9ed4cca78de191f64fac/merged major:0 minor:639 fsType:overlay blockSize:0} 
overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/85164a3d8b2bbe456f625ee99509ce6c0c18bbd0a28960a98c93515f0da46961/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-642:{mountpoint:/var/lib/containers/storage/overlay/1290d6dbfad7c2bfeeeaae46473a44a77e6a54b5a9c761aec2c5ab2dfe8e3edf/merged major:0 minor:642 fsType:overlay blockSize:0} overlay_0-643:{mountpoint:/var/lib/containers/storage/overlay/beb8f773b2bc6ad5a5dd9c204e673902936573482c5eff1b69d692252fe4ac47/merged major:0 minor:643 fsType:overlay blockSize:0} overlay_0-645:{mountpoint:/var/lib/containers/storage/overlay/1d094083a8c04b5e20d9a0160aa3324381564ffc8e50685828ea4c9bbe864efa/merged major:0 minor:645 fsType:overlay blockSize:0} overlay_0-647:{mountpoint:/var/lib/containers/storage/overlay/33514c64ea0245c075d39b29813f9370e08286011283d858d32cb428066bfc96/merged major:0 minor:647 fsType:overlay blockSize:0} overlay_0-649:{mountpoint:/var/lib/containers/storage/overlay/6d0b14d17cb21b283d89635532e111869b9e6459c080e6fcb482c389c023d378/merged major:0 minor:649 fsType:overlay blockSize:0} overlay_0-651:{mountpoint:/var/lib/containers/storage/overlay/aa51bc09217ece69661130aa83ad47eee6e7cc2c3796fd1ccb05dd6937cbc8f5/merged major:0 minor:651 fsType:overlay blockSize:0} overlay_0-653:{mountpoint:/var/lib/containers/storage/overlay/ea724dcf003cd730b1daba077e078be86349808d6f5c8e7b9a660a3264da7195/merged major:0 minor:653 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/83fc57771d07f610734e8c293dc6eb26ee8ee827846468eda27943d5e4236e3b/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-661:{mountpoint:/var/lib/containers/storage/overlay/31085aa132a9db5009b6e725187dbd7def471988f1e2b487913f5b2f6682979e/merged major:0 minor:661 fsType:overlay blockSize:0} overlay_0-662:{mountpoint:/var/lib/containers/storage/overlay/11d64a0554f6af8bf4102c3f05a6700545ece749c8d062d6782c5d82531492e3/merged major:0 minor:662 fsType:overlay blockSize:0} 
overlay_0-666:{mountpoint:/var/lib/containers/storage/overlay/131e90a1687ea51bb6e2ec0c0742063c213778fd6b211011f0a8c555754b48f9/merged major:0 minor:666 fsType:overlay blockSize:0} overlay_0-670:{mountpoint:/var/lib/containers/storage/overlay/b31665cf7ea4a3758aab5cc0e26832d847f7750392e85181f833e1e55733f73a/merged major:0 minor:670 fsType:overlay blockSize:0} overlay_0-671:{mountpoint:/var/lib/containers/storage/overlay/1622ec42da30872a714b5bb449c06617afb4fbe2caeb5296307511830eca9741/merged major:0 minor:671 fsType:overlay blockSize:0} overlay_0-683:{mountpoint:/var/lib/containers/storage/overlay/8233da26cafab55bbb3626a1e77896d606d7b0b8a6dcfb003c7ff15a4c2b670f/merged major:0 minor:683 fsType:overlay blockSize:0} overlay_0-685:{mountpoint:/var/lib/containers/storage/overlay/5c90f36553e2d190fde38077afa5137b2cad097ded19d2bd240190af1194ba44/merged major:0 minor:685 fsType:overlay blockSize:0} overlay_0-689:{mountpoint:/var/lib/containers/storage/overlay/3e178e742944e50d57eeeea017496187bf389aff9c5eed1a7f4d4a410be782a4/merged major:0 minor:689 fsType:overlay blockSize:0} overlay_0-690:{mountpoint:/var/lib/containers/storage/overlay/5f947c759acec7b0ef582036f7ed6265d4a288129b75670fccd436f972809878/merged major:0 minor:690 fsType:overlay blockSize:0} overlay_0-702:{mountpoint:/var/lib/containers/storage/overlay/910f7b0a49caa4e442cc34b0a4e3dbbef2d0371e7d50317121628af57f7f9c48/merged major:0 minor:702 fsType:overlay blockSize:0} overlay_0-71:{mountpoint:/var/lib/containers/storage/overlay/559081d1d2a986ea60fc27abe0a6bc1e044c132fc34584545a9a400b963980f2/merged major:0 minor:71 fsType:overlay blockSize:0} overlay_0-72:{mountpoint:/var/lib/containers/storage/overlay/eeb722afc4b2358ae6d646a8e83fe4fece83af98b74c4005586531cb2083cd1e/merged major:0 minor:72 fsType:overlay blockSize:0} overlay_0-731:{mountpoint:/var/lib/containers/storage/overlay/309cad78ad99c6400192914a10eae9388d786a0567310a425f6cc5f7a7d7cd72/merged major:0 minor:731 fsType:overlay blockSize:0} 
overlay_0-733:{mountpoint:/var/lib/containers/storage/overlay/37e4a1aac7f7602421fde3bea082231bafc89a73701df764ff66a01b17370504/merged major:0 minor:733 fsType:overlay blockSize:0} overlay_0-742:{mountpoint:/var/lib/containers/storage/overlay/4cb0b4c94505b0639d47a52e071880a0394aa615c212b1b311b56ae24128aad0/merged major:0 minor:742 fsType:overlay blockSize:0} overlay_0-744:{mountpoint:/var/lib/containers/storage/overlay/09b450fa04f9520424214bc364e4d55f6ee27a30460c3d8598fb5f31f18cf1e9/merged major:0 minor:744 fsType:overlay blockSize:0} overlay_0-746:{mountpoint:/var/lib/containers/storage/overlay/bd89c7aef115c1140f653a13e95f70745a05a246ab6c4361916bc0c917698419/merged major:0 minor:746 fsType:overlay blockSize:0} overlay_0-754:{mountpoint:/var/lib/containers/storage/overlay/4d3f763a13b5d390e0099d9b97acbc88f30fb52c41b9b83820f65deb674cebde/merged major:0 minor:754 fsType:overlay blockSize:0} overlay_0-755:{mountpoint:/var/lib/containers/storage/overlay/75f07aa483476eb7f0da7d34439030d43256eb2fba053fd638c303c2e9701a83/merged major:0 minor:755 fsType:overlay blockSize:0} overlay_0-761:{mountpoint:/var/lib/containers/storage/overlay/0365c42da49d9cef30a9510711ca3d6f148f61672c3fb63c627fa3c2432f85a3/merged major:0 minor:761 fsType:overlay blockSize:0} overlay_0-77:{mountpoint:/var/lib/containers/storage/overlay/fa14ba127225ddf8da133e8319f7175851527ea5a470e9ab3aaeabd5acacc07b/merged major:0 minor:77 fsType:overlay blockSize:0} overlay_0-775:{mountpoint:/var/lib/containers/storage/overlay/7e6d767f22acbbc9acc0ec15993c5cdb1bbf3b0023d05ec4ab156aaff64920df/merged major:0 minor:775 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/f08b6608ea212f5b986cb5c69baa55de746e42902c98d96457d749fd2b0dbe32/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-792:{mountpoint:/var/lib/containers/storage/overlay/104370ab8f459c455bcbd9af1771a2c31173e7d18993310b18b34614b859b426/merged major:0 minor:792 fsType:overlay blockSize:0} 
overlay_0-794:{mountpoint:/var/lib/containers/storage/overlay/5a03f11e13cf4af1c1a4dc2cce811abb57e8f209d798af829a32557a04697066/merged major:0 minor:794 fsType:overlay blockSize:0} overlay_0-808:{mountpoint:/var/lib/containers/storage/overlay/c3f7a60bcd58ac0d461b7fa27e4a2df6cc52b46602cc2539e19b587cee068658/merged major:0 minor:808 fsType:overlay blockSize:0} overlay_0-810:{mountpoint:/var/lib/containers/storage/overlay/52240ffad30a3d6d475577454767a082c96f1cdc44218d580a7b2aae40cbf330/merged major:0 minor:810 fsType:overlay blockSize:0} overlay_0-816:{mountpoint:/var/lib/containers/storage/overlay/8c45b0f45db444c077361d77ba2940dac28acf61429b70080facbd9d4fa1e006/merged major:0 minor:816 fsType:overlay blockSize:0} overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/83e4a42082c8cebcfe572322aa1915059722ca2eca33bcf57b7b1126c723e9a1/merged major:0 minor:82 fsType:overlay blockSize:0} overlay_0-822:{mountpoint:/var/lib/containers/storage/overlay/0ab4dc9d13a936b369fd11e84ec53f01d55472dbcecc05856d1704c9e612eae5/merged major:0 minor:822 fsType:overlay blockSize:0} overlay_0-828:{mountpoint:/var/lib/containers/storage/overlay/6e0c4380730fa94d2f6d479375be35a67dc697149b52fe6fb069a4587e924744/merged major:0 minor:828 fsType:overlay blockSize:0} overlay_0-836:{mountpoint:/var/lib/containers/storage/overlay/0feb5f0d4d6c27148629487a75304eaf35641402911a4f8c550db337ad2b1e5e/merged major:0 minor:836 fsType:overlay blockSize:0} overlay_0-845:{mountpoint:/var/lib/containers/storage/overlay/e4d4aef2919fb2ae93bcc315c64b57941a9bd5ec0479ecdf8203882e51068832/merged major:0 minor:845 fsType:overlay blockSize:0} overlay_0-868:{mountpoint:/var/lib/containers/storage/overlay/d5f3d94c6af482441da08a7cc68c005eab512581ed479e28e5ea231989404b41/merged major:0 minor:868 fsType:overlay blockSize:0} overlay_0-870:{mountpoint:/var/lib/containers/storage/overlay/32bc3ef473c41d4b3e2d64ceeb7e3e4e8faef7be1e4c7fc326522d7525a2347c/merged major:0 minor:870 fsType:overlay blockSize:0} 
overlay_0-872:{mountpoint:/var/lib/containers/storage/overlay/83838f84a87aadabf3cbec4d8f238e83f59f3a45c3febc12af48bfdf50d586f9/merged major:0 minor:872 fsType:overlay blockSize:0} overlay_0-877:{mountpoint:/var/lib/containers/storage/overlay/c3f8902942fdcaa435133f993108ce862e396dfe7c94d015ade10befd5cac6f7/merged major:0 minor:877 fsType:overlay blockSize:0} overlay_0-887:{mountpoint:/var/lib/containers/storage/overlay/3ec26241205b4c4ce10881b7597de23f8deca2ae9d5dd1be5848c59df70ba900/merged major:0 minor:887 fsType:overlay blockSize:0} overlay_0-891:{mountpoint:/var/lib/containers/storage/overlay/5973b7b88164a1894f026068848c33a342b5e12826288fc1056f5a693788fdf8/merged major:0 minor:891 fsType:overlay blockSize:0} overlay_0-913:{mountpoint:/var/lib/containers/storage/overlay/acfe228f34424e549770d7cdb5c191e90c7495d9f1d200f3b2e8e998d5494ffa/merged major:0 minor:913 fsType:overlay blockSize:0} overlay_0-915:{mountpoint:/var/lib/containers/storage/overlay/9eb62c0c728c92f7073f0178697af3b1b221900b0512884a5ad0e5dd51d95b67/merged major:0 minor:915 fsType:overlay blockSize:0} overlay_0-917:{mountpoint:/var/lib/containers/storage/overlay/56941d3e06ba6d7e518e046428e284853eef20818ebd51b9bf7ed7a58507f136/merged major:0 minor:917 fsType:overlay blockSize:0} overlay_0-92:{mountpoint:/var/lib/containers/storage/overlay/121950fae89ef8311a6caca28913ccda220a15787a0fed2d4da6a86ce727ae4d/merged major:0 minor:92 fsType:overlay blockSize:0} overlay_0-920:{mountpoint:/var/lib/containers/storage/overlay/729ea1bf9cc8539cea2e35e537d7cb524a0f88f2861cd35f9d629764568247af/merged major:0 minor:920 fsType:overlay blockSize:0} overlay_0-927:{mountpoint:/var/lib/containers/storage/overlay/3f8a1bda04066754035f933d3845a2f518df3b9a49c977573fe3a4dd221a48dd/merged major:0 minor:927 fsType:overlay blockSize:0} overlay_0-93:{mountpoint:/var/lib/containers/storage/overlay/5b45217ccc1dc1c73b347c41f0a83375fc813372a43cb7cae7a5a8762941c9d8/merged major:0
minor:93 fsType:overlay blockSize:0} overlay_0-941:{mountpoint:/var/lib/containers/storage/overlay/b3e6457243bfff2e2b1ed17c20567a60fd2bb4740dc2436307fccf9f588eeb54/merged major:0 minor:941 fsType:overlay blockSize:0} overlay_0-943:{mountpoint:/var/lib/containers/storage/overlay/c472be2b8f06d47d63fc33c58a316cd35efdf4887d1cf892cebabc9892c477ce/merged major:0 minor:943 fsType:overlay blockSize:0} overlay_0-95:{mountpoint:/var/lib/containers/storage/overlay/8e8c1a95b7d33cacbd61c702ae349cadd6beaaa786a8797a88653de876136952/merged major:0 minor:95 fsType:overlay blockSize:0} overlay_0-953:{mountpoint:/var/lib/containers/storage/overlay/9690f5d3a1fece659be2f3fce595716d1bbf510b49491390300f16ae02f80a18/merged major:0 minor:953 fsType:overlay blockSize:0} overlay_0-955:{mountpoint:/var/lib/containers/storage/overlay/d07c58aa692e05efa36498f52dbb9eedabd0a620de5cf0fa4c864f1bc01c6d8c/merged major:0 minor:955 fsType:overlay blockSize:0} overlay_0-961:{mountpoint:/var/lib/containers/storage/overlay/aba98dac5a18750beb7b8400c9245aeaf5d3f7e40c9e22d5504b564710e2cfde/merged major:0 minor:961 fsType:overlay blockSize:0} overlay_0-963:{mountpoint:/var/lib/containers/storage/overlay/b84231bb6162e65ba613dd909642c477e6b8f3a0ec5ad294c3f61ac6505f88ef/merged major:0 minor:963 fsType:overlay blockSize:0} overlay_0-965:{mountpoint:/var/lib/containers/storage/overlay/d304a059942ea3407753b7128aa15565ac1f8871b3dab4f7ae4815500b122168/merged major:0 minor:965 fsType:overlay blockSize:0} overlay_0-967:{mountpoint:/var/lib/containers/storage/overlay/3d107c44b5d461ed62e2d057675e5b53d95a6a4769d87c08f79d038b6a3fb070/merged major:0 minor:967 fsType:overlay blockSize:0} overlay_0-97:{mountpoint:/var/lib/containers/storage/overlay/b047246160ea61a99241a5c5e245646fd44789293a69229f93019da86da397ef/merged major:0 minor:97 fsType:overlay blockSize:0} overlay_0-972:{mountpoint:/var/lib/containers/storage/overlay/5e40e30c7da4f3b012e5da81eecbed31d528e26ba53a7b016162463100e601dd/merged major:0 minor:972 fsType:overlay 
blockSize:0} overlay_0-977:{mountpoint:/var/lib/containers/storage/overlay/824698f79b0a17932fec108794867f4853532cd6f70a0a4ec19fe85c54c7061a/merged major:0 minor:977 fsType:overlay blockSize:0} overlay_0-984:{mountpoint:/var/lib/containers/storage/overlay/d518f4c67fd88be04db78b20d6e46ef490e05ad3d3e7c46ea4b314f45fb013ae/merged major:0 minor:984 fsType:overlay blockSize:0} overlay_0-992:{mountpoint:/var/lib/containers/storage/overlay/5e6b57fcfd2fa8a5f0513617bb7f293a12137e23b4cdfea0b315d681f2ce4ba7/merged major:0 minor:992 fsType:overlay blockSize:0}] Mar 08 03:31:26.268374 master-0 kubenswrapper[33141]: I0308 03:31:26.266493 33141 manager.go:217] Machine: {Timestamp:2026-03-08 03:31:26.26542768 +0000 UTC m=+0.135320923 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:ca41eca1edff4210bb11657bca9f1e6d SystemUUID:ca41eca1-edff-4210-bb11-657bca9f1e6d BootID:c341f940-4e88-4b9b-a4b4-98442bfad22d Filesystems:[{Device:overlay_0-868 DeviceMajor:0 DeviceMinor:868 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3fb6887992993ed2286a2778f2126c5d98e2f2a673949f835554364dd15f2803/userdata/shm DeviceMajor:0 DeviceMinor:882 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1068 DeviceMajor:0 DeviceMinor:1068 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f7ce1d7e36af0a8d1a304742efe774e5b42b51a042e077bc8da8bd1a942eda38/userdata/shm DeviceMajor:0 DeviceMinor:911 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/7a1b7b0d-6e00-485e-86e8-7bd047569328/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:703 Capacity:32475525120 
Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/6176b631-3911-41cd-beb6-5bc2e924c3a7/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:985 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/1fa64f1b-9f10-488b-8f94-1600774062c4/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:224 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1106 DeviceMajor:0 DeviceMinor:1106 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-106 DeviceMajor:0 DeviceMinor:106 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f2057fa5db1def1b4beab4f6ad7ad5d375b26c00136a93b9850880221e4af077/userdata/shm DeviceMajor:0 DeviceMinor:100 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/578f97e51f168b1d370b9c59540a7c839458a113d3777e0d88797827b040f10e/userdata/shm DeviceMajor:0 DeviceMinor:831 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8763acbe8455fad4530b6a292ec3d641368771a0e2662a77415028cd12a34859/userdata/shm DeviceMajor:0 DeviceMinor:1169 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/90ef7c0a-7c6f-45aa-865d-1e247110b265/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d2a53f3b-7e22-47eb-9f28-da3441b3662f/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:728 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1011 DeviceMajor:0 DeviceMinor:1011 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-397 DeviceMajor:0 DeviceMinor:397 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6/volumes/kubernetes.io~projected/kube-api-access-2mbg2 DeviceMajor:0 DeviceMinor:112 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/42b9f2d1-da5c-46b5-b131-d206fa37d436/volumes/kubernetes.io~projected/kube-api-access-bkckt DeviceMajor:0 DeviceMinor:881 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d/volumes/kubernetes.io~projected/kube-api-access-kxcml DeviceMajor:0 DeviceMinor:905 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-158 DeviceMajor:0 DeviceMinor:158 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-428 DeviceMajor:0 DeviceMinor:428 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f2057f75-159d-4416-a234-050f0fe1afc9/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:478 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-487 DeviceMajor:0 DeviceMinor:487 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b5b4816a1b0e9863b488619eb67bad29895714d7381b49c1cf6bbbe6c6b403f8/userdata/shm DeviceMajor:0 DeviceMinor:688 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/197afe92-5912-4e90-a477-e3abe001bbc7/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:450 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-472 DeviceMajor:0 DeviceMinor:472 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-346 DeviceMajor:0 DeviceMinor:346 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/fcc3b92d08a13fa636c372e9652644c8188d8f895a9f938085de2edbe54bf982/userdata/shm DeviceMajor:0 DeviceMinor:443 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b611cc0d60bde7b49abae1aff82de97336ebe3d15e74f2544de647745e83e553/userdata/shm DeviceMajor:0 DeviceMinor:1092 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d68278f6-59d5-4bbf-b969-e47635ffd4cc/volumes/kubernetes.io~projected/kube-api-access-sstv2 DeviceMajor:0 DeviceMinor:269 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/f2057f75-159d-4416-a234-050f0fe1afc9/volumes/kubernetes.io~projected/kube-api-access-c9vkx DeviceMajor:0 DeviceMinor:476 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/296c48bf2ce9de06a78dcb57c1cdbe34ecc220f6b65f5aa0b90cfb68a9d30264/userdata/shm DeviceMajor:0 DeviceMinor:790 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1045 DeviceMajor:0 DeviceMinor:1045 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-357 DeviceMajor:0 DeviceMinor:357 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-270 DeviceMajor:0 DeviceMinor:270 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9fb588a9-6240-4513-8e4b-248eb43d3f06/volumes/kubernetes.io~projected/kube-api-access-5d8xq DeviceMajor:0 DeviceMinor:370 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-138 DeviceMajor:0 DeviceMinor:138 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/3656e53b736cafa9b6c056ac5eca5807c9f3942f84ffbe91cd640949d983eff6/userdata/shm DeviceMajor:0 DeviceMinor:273 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/9b090750-b893-42fe-8def-dfb3f4253d43/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:375 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ed56c17f-7e15-4776-80a6-3ef091307e89/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:611 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 DeviceMinor:1013 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-689 DeviceMajor:0 DeviceMinor:689 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6/volumes/kubernetes.io~projected/kube-api-access-bdzj9 DeviceMajor:0 DeviceMinor:245 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/1d446527-f3fd-4a37-a980-7445031928d1/volumes/kubernetes.io~projected/kube-api-access-2qvl4 DeviceMajor:0 DeviceMinor:272 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-418 DeviceMajor:0 DeviceMinor:418 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:610 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-754 DeviceMajor:0 
DeviceMinor:754 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ae8f3a1e-689b-4107-993a-dde67f4decf2/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 DeviceMinor:830 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/2a506cf6-bc39-4089-9caa-4c14c4d15c11/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:232 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d82cf0db-0891-482d-856b-1675843042dd/volumes/kubernetes.io~projected/kube-api-access-g4kt5 DeviceMajor:0 DeviceMinor:261 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/7fafb070-7914-41c2-a8b2-e609a0e5bf9f/volumes/kubernetes.io~projected/kube-api-access-4rtt8 DeviceMajor:0 DeviceMinor:865 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-642 DeviceMajor:0 DeviceMinor:642 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-345 DeviceMajor:0 DeviceMinor:345 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd/volumes/kubernetes.io~projected/kube-api-access-h4gf5 DeviceMajor:0 DeviceMinor:554 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ae8f3a1e-689b-4107-993a-dde67f4decf2/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:945 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9cfe782c9ff029928aff445d3583f6e6a05ba9a4632c234c96ec9b0f2402bfc5/userdata/shm DeviceMajor:0 DeviceMinor:57 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2cfaca9fcdc537eb7c408c01daad733c4e6c46861c4477e533321e5ad366b94d/userdata/shm DeviceMajor:0 DeviceMinor:144 Capacity:67108864 Type:vfs 
Inodes:4108169 HasInodes:true} {Device:overlay_0-917 DeviceMajor:0 DeviceMinor:917 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/16ca7ace-9608-4686-a039-a6ba6e3ab837/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 DeviceMinor:1000 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/bd53c98b-51cc-498a-ab37-f743a27bdcfb/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:751 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-891 DeviceMajor:0 DeviceMinor:891 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-794 DeviceMajor:0 DeviceMinor:794 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b5a1a52b83c9907ea89396038c11ee345fe83157541875e3f7507eab9c4bb205/userdata/shm DeviceMajor:0 DeviceMinor:559 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/8985dac1-38cf-41d1-b7cd-c2bfaf0f6ebc/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:899 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-828 DeviceMajor:0 DeviceMinor:828 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-95 DeviceMajor:0 DeviceMinor:95 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d577cf22293cc3efccf6f8d7b5c5def3ac27aeb747212f6643892edfacc4bbc3/userdata/shm DeviceMajor:0 DeviceMinor:291 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-298 DeviceMajor:0 DeviceMinor:298 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-538 DeviceMajor:0 DeviceMinor:538 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-822 DeviceMajor:0 DeviceMinor:822 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-872 DeviceMajor:0 DeviceMinor:872 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-308 DeviceMajor:0 DeviceMinor:308 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bd1bcaff-7dbd-4559-92fc-5453993f643e/volumes/kubernetes.io~projected/kube-api-access-wplgs DeviceMajor:0 DeviceMinor:218 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:558 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/08be87d753f8ff54c42a674e20a358f8fd1197e96c11ac4af2d4563dac916924/userdata/shm DeviceMajor:0 DeviceMinor:521 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/daf9e0ac-b5a3-4a3e-aa57-31b810f634ef/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:1170 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/9d40fba7-84f0-46d7-9b49-dbba7aab20c5/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:126 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5a92a557-d023-4531-b3a3-e559af0fe358/volumes/kubernetes.io~projected/kube-api-access-vgvcz DeviceMajor:0 DeviceMinor:238 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-452 DeviceMajor:0 DeviceMinor:452 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/888efea2277e570177f0a32dc3869b5a0e7a8f448a8a3f5fd3fa3dbd19d67ef3/userdata/shm DeviceMajor:0 
DeviceMinor:430 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-977 DeviceMajor:0 DeviceMinor:977 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-92 DeviceMajor:0 DeviceMinor:92 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-348 DeviceMajor:0 DeviceMinor:348 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-755 DeviceMajor:0 DeviceMinor:755 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ae8f3a1e-689b-4107-993a-dde67f4decf2/volumes/kubernetes.io~projected/kube-api-access-ctdbq DeviceMajor:0 DeviceMinor:949 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-572 DeviceMajor:0 DeviceMinor:572 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7318cd3451d32a71b4c756d7048c3d653bc133c447ae6a1c5c593d8efda4718a/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/17b37add10475bc68eb15628021eecebb97b383f212ff9b1f6eec1b7b5ecb93d/userdata/shm DeviceMajor:0 DeviceMinor:279 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-258 DeviceMajor:0 DeviceMinor:258 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/901d5d72687a570475c0c1ccb8e78c8e542036296238b7606d96a86beb5c35c7/userdata/shm DeviceMajor:0 DeviceMinor:631 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-670 DeviceMajor:0 DeviceMinor:670 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-816 
DeviceMajor:0 DeviceMinor:816 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/99923acc-a1b4-4fbc-a636-f9c145856b01/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:929 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1082 DeviceMajor:0 DeviceMinor:1082 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4fd323ae-11bf-4207-bdce-4d51a9c19dc3/volumes/kubernetes.io~projected/kube-api-access-2ct9j DeviceMajor:0 DeviceMinor:148 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/5a058138-8039-4841-821b-7ee5bb8648e4/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:240 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/78bd83c51ec0b72f8c1c51a4e8cc4279f7e9fc2470a6586c4f8e968fc90dd9c1/userdata/shm DeviceMajor:0 DeviceMinor:241 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0722d9c3-77b8-4770-9171-d4aeba4b0cc7/volumes/kubernetes.io~projected/kube-api-access-vnvtg DeviceMajor:0 DeviceMinor:244 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c6d3624a26cf17ed6d9d863dbd0123f9d75c4ad1fd279b49f51b9d0ec0bcd2e7/userdata/shm DeviceMajor:0 DeviceMinor:249 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-746 DeviceMajor:0 DeviceMinor:746 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-507 DeviceMajor:0 DeviceMinor:507 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-548 DeviceMajor:0 DeviceMinor:548 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/45212ce7-5f95-402e-93c4-83bac844f77d/volumes/kubernetes.io~projected/kube-api-access-knc57 DeviceMajor:0 DeviceMinor:787 Capacity:32475525120 Type:vfs 
Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/7af634f0-65ac-402a-acd6-a8aad11b37ab/volumes/kubernetes.io~projected/kube-api-access-sm9tk DeviceMajor:0 DeviceMinor:386 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/2728b91e-d59a-4e85-b245-0f297e9377f9/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:800 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/99923acc-a1b4-4fbc-a636-f9c145856b01/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:930 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ef16d7ae-66aa-45d4-b1a6-1327738a46bb/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:451 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:553 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/efd90b06-2733-4086-8d70-b9aed3f7c5fa/volumes/kubernetes.io~projected/kube-api-access-w5qkq DeviceMajor:0 DeviceMinor:81 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-662 DeviceMajor:0 DeviceMinor:662 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f2057f75-159d-4416-a234-050f0fe1afc9/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:465 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-647 DeviceMajor:0 DeviceMinor:647 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/197afe92-5912-4e90-a477-e3abe001bbc7/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:216 Capacity:32475525120 Type:vfs Inodes:4108169 
HasInodes:true} {Device:overlay_0-275 DeviceMajor:0 DeviceMinor:275 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-792 DeviceMajor:0 DeviceMinor:792 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-953 DeviceMajor:0 DeviceMinor:953 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/846f36ee6a71e885eba4255e43db9daaf610d513f1e85ae2a0f46bf5cfb8b1a1/userdata/shm DeviceMajor:0 DeviceMinor:759 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9d40fba7-84f0-46d7-9b49-dbba7aab20c5/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-761 DeviceMajor:0 DeviceMinor:761 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/45212ce7-5f95-402e-93c4-83bac844f77d/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:344 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/343f5202f680e6489744b1829ff30f9c82b78fc022fbaf1325e4c8fa7cfe17d8/userdata/shm DeviceMajor:0 DeviceMinor:951 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2bd783cbda23be7989b39c47de53b6fd58c76ea7fdfdcd9d506ba6bc622ba3e3/userdata/shm DeviceMajor:0 DeviceMinor:879 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-744 DeviceMajor:0 DeviceMinor:744 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bd1bcaff-7dbd-4559-92fc-5453993f643e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:overlay_0-589 DeviceMajor:0 DeviceMinor:589 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/005487746ccdf8af07cdeab4d2100f98db1e134d2cd05ee46be8a62328152f7d/userdata/shm DeviceMajor:0 DeviceMinor:1109 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-314 DeviceMajor:0 DeviceMinor:314 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/708fff129dc113f73aa37f475b4ae4bc5c5913ac215686fbff11aa81a810bb5e/userdata/shm DeviceMajor:0 DeviceMinor:89 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/197afe92-5912-4e90-a477-e3abe001bbc7/volumes/kubernetes.io~projected/kube-api-access-2kd6j DeviceMajor:0 DeviceMinor:219 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/9b090750-b893-42fe-8def-dfb3f4253d43/volumes/kubernetes.io~projected/kube-api-access-p8l6s DeviceMajor:0 DeviceMinor:523 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2db78ea27514b302571913d9c4c80a0241da223717474e7c9dd37ca7d04999ae/userdata/shm DeviceMajor:0 DeviceMinor:1029 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1043 DeviceMajor:0 DeviceMinor:1043 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/89e15db4-c541-4d53-878d-706fa022f970/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:260 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1062 DeviceMajor:0 DeviceMinor:1062 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-683 DeviceMajor:0 DeviceMinor:683 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-46 DeviceMajor:0 DeviceMinor:46 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-591 DeviceMajor:0 DeviceMinor:591 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4711e21f-da6d-47ee-8722-64663e05de10/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1096 DeviceMajor:0 DeviceMinor:1096 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6/volumes/kubernetes.io~projected/kube-api-access-7q68p DeviceMajor:0 DeviceMinor:252 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/5d29f16f-e26f-4b9d-a646-230316e936a8/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:441 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:612 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7ae6734dc9a6a4883d043259eba3b292e17119fb0b35a539821b49660768f326/userdata/shm DeviceMajor:0 DeviceMinor:76 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1021 DeviceMajor:0 DeviceMinor:1021 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-530 DeviceMajor:0 DeviceMinor:530 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1041 DeviceMajor:0 DeviceMinor:1041 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/6c2ad8212c197eee7b469f1de5efa66984b471df3e1f03d54b6b5ff8745f2152/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-364 DeviceMajor:0 DeviceMinor:364 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1a7085411bd9650b06b777535c32a51b5f0829889be0498544a2a5320ab65b31/userdata/shm DeviceMajor:0 DeviceMinor:68 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-330 DeviceMajor:0 DeviceMinor:330 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-368 DeviceMajor:0 DeviceMinor:368 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-110 DeviceMajor:0 DeviceMinor:110 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f7b4207e156e5bf2edc3fece9e2843a82ae15105a8e6a5ed4d557ebec8b1b2e1/userdata/shm DeviceMajor:0 DeviceMinor:376 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-943 DeviceMajor:0 DeviceMinor:943 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1004 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1009 DeviceMajor:0 DeviceMinor:1009 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/16ca7ace-9608-4686-a039-a6ba6e3ab837/volumes/kubernetes.io~projected/kube-api-access-w8cgc DeviceMajor:0 DeviceMinor:1002 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/1e82d678-b5bb-4aec-9b5d-435305e8bdc2/volumes/kubernetes.io~secret/secret-metrics-client-certs DeviceMajor:0 DeviceMinor:1098 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/bd53c98b-51cc-498a-ab37-f743a27bdcfb/volumes/kubernetes.io~projected/kube-api-access-hz7l8 DeviceMajor:0 DeviceMinor:757 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-702 DeviceMajor:0 DeviceMinor:702 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/80f8e0a5b29cf774f05a36f5e54407ef8ecffe58d5e1c71074bcd340ab2217dd/userdata/shm DeviceMajor:0 DeviceMinor:812 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:903 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1023 DeviceMajor:0 DeviceMinor:1023 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1108 DeviceMajor:0 DeviceMinor:1108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/42b9f2d1-da5c-46b5-b131-d206fa37d436/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:880 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1177 DeviceMajor:0 DeviceMinor:1177 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/631b3a8e-43e0-4818-b6e1-bd61ac531ab6/volumes/kubernetes.io~projected/kube-api-access-6q425 DeviceMajor:0 DeviceMinor:125 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-685 DeviceMajor:0 DeviceMinor:685 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/f47ce532692381e3555ceaa331dea07e3ba8f75b7ab217af49fad07906bb6714/userdata/shm DeviceMajor:0 DeviceMinor:909 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-223 DeviceMajor:0 DeviceMinor:223 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-445 DeviceMajor:0 DeviceMinor:445 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-391 DeviceMajor:0 DeviceMinor:391 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-358 DeviceMajor:0 DeviceMinor:358 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f520fbf8-9403-46bc-9381-226a3a1ed1c7/volumes/kubernetes.io~projected/kube-api-access-hrq96 DeviceMajor:0 DeviceMinor:424 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-474 DeviceMajor:0 DeviceMinor:474 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d2a53f3b-7e22-47eb-9f28-da3441b3662f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:723 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-524 DeviceMajor:0 DeviceMinor:524 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3b7b4beff94637a634e8ef9e4b25f19f962ecdd386d4f992ddeae713d81fd595/userdata/shm DeviceMajor:0 DeviceMinor:555 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c5a4db52edd426e8cea689535b3e9c7e16767678dd5ad98d256870c1726c756c/userdata/shm DeviceMajor:0 DeviceMinor:998 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-599 DeviceMajor:0 DeviceMinor:599 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-337 DeviceMajor:0 DeviceMinor:337 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-504 DeviceMajor:0 DeviceMinor:504 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-491 DeviceMajor:0 DeviceMinor:491 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b537a655-ef73-40b5-b228-95ab6cfdedf2/volumes/kubernetes.io~projected/kube-api-access-d4t2j DeviceMajor:0 DeviceMinor:113 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-312 DeviceMajor:0 DeviceMinor:312 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-283 DeviceMajor:0 DeviceMinor:283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1b486915ec2d9eb73fc4331b88d96e65ac9fd451489c056db54081b15711177b/userdata/shm DeviceMajor:0 DeviceMinor:628 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7b27a4cf8670701cc2abed7a5d7cf91c3ac386bb22a1ffb161f3900b04157d20/userdata/shm DeviceMajor:0 DeviceMinor:655 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1084 DeviceMajor:0 DeviceMinor:1084 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-302 DeviceMajor:0 DeviceMinor:302 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-332 DeviceMajor:0 DeviceMinor:332 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-267 DeviceMajor:0 DeviceMinor:267 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-490 DeviceMajor:0 DeviceMinor:490 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-616 DeviceMajor:0 DeviceMinor:616 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/33abd37edec3b6673abf4565124ec1bb97dfb231042f8c1557bae037c9db586c/userdata/shm DeviceMajor:0 DeviceMinor:281 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-604 DeviceMajor:0 DeviceMinor:604 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f30b40b5dee25f4cfef68deaa81953cc276010f2fb26052242518f7b573301d1/userdata/shm DeviceMajor:0 DeviceMinor:939 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1151 DeviceMajor:0 DeviceMinor:1151 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/965f8eef-c5af-499b-b1db-cf63072781cc/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:798 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/89fc77c9-b444-4828-8a35-c63ea9335245/volumes/kubernetes.io~projected/kube-api-access-6xrfv DeviceMajor:0 DeviceMinor:91 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b835d8031dbcbc04b5cf9f5f9326f7df63aa6cc447918f61407dc7395da0cf96/userdata/shm DeviceMajor:0 DeviceMinor:277 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-927 DeviceMajor:0 DeviceMinor:927 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-941 DeviceMajor:0 DeviceMinor:941 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1074 DeviceMajor:0 DeviceMinor:1074 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/a0ee8c53-bf36-4459-a2c2-380293a09e26/volumes/kubernetes.io~projected/kube-api-access-c8krg DeviceMajor:0 DeviceMinor:1146 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-540 DeviceMajor:0 DeviceMinor:540 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-72 DeviceMajor:0 DeviceMinor:72 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5a92a557-d023-4531-b3a3-e559af0fe358/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:609 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/15567f529dadb966bb3f2ed3bd55c3bbbb0f335669e907e0d29044fa59e27ca2/userdata/shm DeviceMajor:0 DeviceMinor:1034 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/1e82d678-b5bb-4aec-9b5d-435305e8bdc2/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1103 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-454 DeviceMajor:0 DeviceMinor:454 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-432 DeviceMajor:0 DeviceMinor:432 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/90d6dd3478d5a96b9991ca2dea6f7e3c092c924b63627e5a5258e2d1cefa9467/userdata/shm DeviceMajor:0 DeviceMinor:907 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1052 DeviceMajor:0 DeviceMinor:1052 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e0863a084dab5a5150480ef18603c4be97dcab69eda52c04e9d468c989d32511/userdata/shm DeviceMajor:0 DeviceMinor:104 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-265 DeviceMajor:0 DeviceMinor:265 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/6176b631-3911-41cd-beb6-5bc2e924c3a7/volumes/kubernetes.io~projected/kube-api-access-snwdh DeviceMajor:0 DeviceMinor:904 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1031 DeviceMajor:0 DeviceMinor:1031 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f6ee6202-11e5-4586-ae46-075da1ad7f1a/volumes/kubernetes.io~projected/kube-api-access-njrcj DeviceMajor:0 DeviceMinor:123 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-992 DeviceMajor:0 DeviceMinor:992 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3cd41a65358471f5054db74b4750cf6ade61d95a5a85377f17ce5e88dcbed459/userdata/shm DeviceMajor:0 DeviceMinor:632 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/4fd323ae-11bf-4207-bdce-4d51a9c19dc3/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:140 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/82ee54a2-5967-4da7-940c-5200d7df098d/volumes/kubernetes.io~projected/kube-api-access-ttwx8 DeviceMajor:0 DeviceMinor:520 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/99923acc-a1b4-4fbc-a636-f9c145856b01/volumes/kubernetes.io~projected/kube-api-access-tfdpq DeviceMajor:0 DeviceMinor:938 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1149 DeviceMajor:0 DeviceMinor:1149 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bc9a73581d61f23b90565e1504479bb07c7036d273c39ea527a5c5e5f96ad318/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/89fc77c9-b444-4828-8a35-c63ea9335245/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-915 DeviceMajor:0 DeviceMinor:915 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-378 DeviceMajor:0 DeviceMinor:378 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-639 DeviceMajor:0 DeviceMinor:639 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-733 DeviceMajor:0 DeviceMinor:733 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cd205a040d032b191e7f07df4a3f791df390b5a5d5098d634b2bcb3100b4a7bb/userdata/shm DeviceMajor:0 DeviceMinor:804 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:1051 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/beed862c-6283-4568-aa2e-f49b31e30a3b/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:1018 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:227 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/beed862c-6283-4568-aa2e-f49b31e30a3b/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1003 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1094 DeviceMajor:0 DeviceMinor:1094 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-972 DeviceMajor:0 
DeviceMinor:972 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e6716923-7f46-438f-9cc4-c0f071ca5b1a/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:399 Capacity:200003584 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-505 DeviceMajor:0 DeviceMinor:505 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b537a655-ef73-40b5-b228-95ab6cfdedf2/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:950 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1389ca3c0a68c688490c2796e3b27e9ac02672c5ceeb0cb3aade38fd422867f7/userdata/shm DeviceMajor:0 DeviceMinor:849 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/38287d1a-b784-4ce9-9650-949d92469519/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:1037 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7f21e214cb8d847d79985954284fcf2d5d0fe1c85a034843bd4226982b10fa7b/userdata/shm DeviceMajor:0 DeviceMinor:1039 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-645 DeviceMajor:0 DeviceMinor:645 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-93 DeviceMajor:0 DeviceMinor:93 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0e61ec2701bfc25eb5be928b08cc38e792bd258a0029c05a51bd1e479e58f0e3/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e7ddc2cc17107ecc5f5679a895a40a2316543cd8ac3957bbb6fdbfd52f258bbd/userdata/shm DeviceMajor:0 DeviceMinor:256 Capacity:67108864 
Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:557 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ef16d7ae-66aa-45d4-b1a6-1327738a46bb/volumes/kubernetes.io~projected/kube-api-access-mgfrv DeviceMajor:0 DeviceMinor:220 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/5a058138-8039-4841-821b-7ee5bb8648e4/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:229 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0a9eb19952ec20b1658c5d7279dba5a3e819952572f69b34c3995c362fd16f77/userdata/shm DeviceMajor:0 DeviceMinor:247 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/2a506cf6-bc39-4089-9caa-4c14c4d15c11/volumes/kubernetes.io~projected/kube-api-access-7flfl DeviceMajor:0 DeviceMinor:264 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/399c5025-da66-4c52-8e68-ea6c996d9cc8/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:556 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/2ffe00fd-6834-4a5b-8b0b-b467d284f23c/volumes/kubernetes.io~projected/kube-api-access-f42fg DeviceMajor:0 DeviceMinor:797 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1232aad5956093753d35685897e21ebb416211a87662dd6ecf51a5d3e9c0b32a/userdata/shm DeviceMajor:0 DeviceMinor:231 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/89e15db4-c541-4d53-878d-706fa022f970/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:236 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-920 DeviceMajor:0 DeviceMinor:920 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/16ca7ace-9608-4686-a039-a6ba6e3ab837/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:999 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/1fa64f1b-9f10-488b-8f94-1600774062c4/volumes/kubernetes.io~projected/kube-api-access-8k2lp DeviceMajor:0 DeviceMinor:251 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774/volumes/kubernetes.io~projected/kube-api-access-w2ng6 DeviceMajor:0 DeviceMinor:333 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-643 DeviceMajor:0 DeviceMinor:643 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-836 DeviceMajor:0 DeviceMinor:836 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/323b10005e4debbf49965c6c6b8a7d60537ce630469f2e6648f22893122d5907/userdata/shm DeviceMajor:0 DeviceMinor:461 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/f6ee6202-11e5-4586-ae46-075da1ad7f1a/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:614 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/1e82d678-b5bb-4aec-9b5d-435305e8bdc2/volumes/kubernetes.io~projected/kube-api-access-ppbl6 DeviceMajor:0 DeviceMinor:1105 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/1d446527-f3fd-4a37-a980-7445031928d1/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:225 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/5d29f16f-e26f-4b9d-a646-230316e936a8/volumes/kubernetes.io~projected/kube-api-access-7p4tj DeviceMajor:0 DeviceMinor:447 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/63df01fd9ed048d9f095f5eeea9d96eeca7e15c41770d9375fbe4be8cc706183/userdata/shm DeviceMajor:0 DeviceMinor:729 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-877 DeviceMajor:0 DeviceMinor:877 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/aadf7b67-db33-4392-81f5-1b93eef54545/volumes/kubernetes.io~projected/kube-api-access-n4vq9 DeviceMajor:0 DeviceMinor:243 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-574 DeviceMajor:0 DeviceMinor:574 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-649 DeviceMajor:0 DeviceMinor:649 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-661 DeviceMajor:0 DeviceMinor:661 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-71 DeviceMajor:0 DeviceMinor:71 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-300 DeviceMajor:0 DeviceMinor:300 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-362 DeviceMajor:0 DeviceMinor:362 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-606 DeviceMajor:0 DeviceMinor:606 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b66b70c78dec2cc9fda46d55ae86f4ac9d3a2e620b251090c661d75cafe17663/userdata/shm DeviceMajor:0 DeviceMinor:866 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0722d9c3-77b8-4770-9171-d4aeba4b0cc7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:230 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/1b34330ab0e38ca065ff7c208891466fd5dc198028c2433e196ee9914284d260/userdata/shm DeviceMajor:0 DeviceMinor:416 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-567 DeviceMajor:0 DeviceMinor:567 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-164 DeviceMajor:0 DeviceMinor:164 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-316 DeviceMajor:0 DeviceMinor:316 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-870 DeviceMajor:0 DeviceMinor:870 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/399c5025-da66-4c52-8e68-ea6c996d9cc8/volumes/kubernetes.io~projected/kube-api-access-vr9bw DeviceMajor:0 DeviceMinor:561 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/45212ce7-5f95-402e-93c4-83bac844f77d/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:782 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/beed862c-6283-4568-aa2e-f49b31e30a3b/volumes/kubernetes.io~projected/kube-api-access-22zrr DeviceMajor:0 DeviceMinor:1006 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/631b3a8e-43e0-4818-b6e1-bd61ac531ab6/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-742 DeviceMajor:0 DeviceMinor:742 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-464 DeviceMajor:0 DeviceMinor:464 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/81abc17a-8a51-44e2-a5df-5ddb394a9fa6/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:806 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-810 DeviceMajor:0 DeviceMinor:810 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:897 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1178 DeviceMajor:0 DeviceMinor:1178 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-845 DeviceMajor:0 DeviceMinor:845 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-666 DeviceMajor:0 DeviceMinor:666 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3c336192-80ee-4d53-a4ec-710cba95fac6/volumes/kubernetes.io~projected/kube-api-access-6tj8l DeviceMajor:0 DeviceMinor:380 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1047 DeviceMajor:0 DeviceMinor:1047 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ed56c17f-7e15-4776-80a6-3ef091307e89/volumes/kubernetes.io~projected/kube-api-access-4kxn4 DeviceMajor:0 DeviceMinor:262 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-437 DeviceMajor:0 DeviceMinor:437 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1e82d678-b5bb-4aec-9b5d-435305e8bdc2/volumes/kubernetes.io~secret/secret-metrics-server-tls 
DeviceMajor:0 DeviceMinor:1104 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a55bef81-2381-4036-b171-3dbc77e9c25d/volumes/kubernetes.io~projected/kube-api-access-hj7h8 DeviceMajor:0 DeviceMinor:98 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d82cf0db-0891-482d-856b-1675843042dd/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:449 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-731 DeviceMajor:0 DeviceMinor:731 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-963 DeviceMajor:0 DeviceMinor:963 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1181 DeviceMajor:0 DeviceMinor:1181 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b/volumes/kubernetes.io~projected/kube-api-access-c72dm DeviceMajor:0 DeviceMinor:563 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7a6ea17a030d90670e0e331f269af06bb55ade280ec6f510768c353e818db740/userdata/shm DeviceMajor:0 DeviceMinor:1147 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/9d40fba7-84f0-46d7-9b49-dbba7aab20c5/volumes/kubernetes.io~projected/kube-api-access-hl7m5 DeviceMajor:0 DeviceMinor:127 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-356 DeviceMajor:0 DeviceMinor:356 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-466 DeviceMajor:0 DeviceMinor:466 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/981e0f271702172a27daba182461095b8682ca12b72ed3f46de2b6751994f11f/userdata/shm 
DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-955 DeviceMajor:0 DeviceMinor:955 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:829 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a71f01482badfd599ecfabb1babd6c7d23f18015321cbb4541d2c57b236ce1e9/userdata/shm DeviceMajor:0 DeviceMinor:389 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/103158c5-c99f-4224-bf5a-e23b1aaf9172/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:384 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/5d29f16f-e26f-4b9d-a646-230316e936a8/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:446 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:898 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-160 DeviceMajor:0 DeviceMinor:160 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3d69f101-60a8-41fd-bcda-4eb654c626a2/volumes/kubernetes.io~projected/kube-api-access-8gnng DeviceMajor:0 DeviceMinor:221 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:235 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/5f8a5dd7ddb9e30727d036901155a403a90563b27d3748f6e9c804013b40f108/userdata/shm DeviceMajor:0 DeviceMinor:565 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5ffe2f08a61a9faac98a304d7e3f26296109a1c759116e58c683819c7d929612/userdata/shm DeviceMajor:0 DeviceMinor:634 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-533 DeviceMajor:0 DeviceMinor:533 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f5a6cee35f22c780870380f137c7c7ac5cad4e9bf1cc3de7531cd3267c12f312/userdata/shm DeviceMajor:0 DeviceMinor:456 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b1f92e19e760a85c21780cc29101c92446f01b76f5fa8e09729c263a935894ed/userdata/shm DeviceMajor:0 DeviceMinor:564 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-414 DeviceMajor:0 DeviceMinor:414 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d68278f6-59d5-4bbf-b969-e47635ffd4cc/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:615 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-775 DeviceMajor:0 DeviceMinor:775 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8c65557b-9566-49f1-a049-fe492ca201b5/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:1120 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/58f21db0fa1eb017fe823a0691c0c2ecef386aab7abe2946fa7a3c24e39e3c68/userdata/shm DeviceMajor:0 DeviceMinor:69 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-328 DeviceMajor:0 
DeviceMinor:328 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7af634f0-65ac-402a-acd6-a8aad11b37ab/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:385 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-690 DeviceMajor:0 DeviceMinor:690 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7a1b7b0d-6e00-485e-86e8-7bd047569328/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:704 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-97 DeviceMajor:0 DeviceMinor:97 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/da13ebe4bb39b539d69ddd6f98c92aef7a368cb8e590b47b5129b0e84f51f727/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-913 DeviceMajor:0 DeviceMinor:913 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/975b4d0b44381f65f95d81f848a4362b6807994f0beac99be40baae93513b5d6/userdata/shm DeviceMajor:0 DeviceMinor:253 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:551 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-570 DeviceMajor:0 DeviceMinor:570 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-587 DeviceMajor:0 DeviceMinor:587 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-887 DeviceMajor:0 DeviceMinor:887 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f2057f75-159d-4416-a234-050f0fe1afc9/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:438 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/7fafb070-7914-41c2-a8b2-e609a0e5bf9f/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:864 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:852 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-965 DeviceMajor:0 DeviceMinor:965 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2ffe00fd-6834-4a5b-8b0b-b467d284f23c/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:1089 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d5eee869-c27f-4534-bbce-d954c42b36a3/volumes/kubernetes.io~projected/kube-api-access-l2tk7 DeviceMajor:0 DeviceMinor:118 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/361223b8a35fa2e488a299fb5b083b6bc9563230c5745f5243422471a4cde526/userdata/shm DeviceMajor:0 DeviceMinor:542 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-653 DeviceMajor:0 DeviceMinor:653 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/38287d1a-b784-4ce9-9650-949d92469519/volumes/kubernetes.io~projected/kube-api-access-f4gcw DeviceMajor:0 DeviceMinor:322 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4711e21f-da6d-47ee-8722-64663e05de10/volumes/kubernetes.io~projected/kube-api-access-ms6s7 
DeviceMajor:0 DeviceMinor:217 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d82cf0db-0891-482d-856b-1675843042dd/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:246 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1/volumes/kubernetes.io~projected/kube-api-access-g28tv DeviceMajor:0 DeviceMinor:323 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/8c65557b-9566-49f1-a049-fe492ca201b5/volumes/kubernetes.io~projected/kube-api-access-5fw25 DeviceMajor:0 DeviceMinor:841 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/daf9e0ac-b5a3-4a3e-aa57-31b810f634ef/volumes/kubernetes.io~projected/kube-api-access-t29sr DeviceMajor:0 DeviceMinor:1175 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-77 DeviceMajor:0 DeviceMinor:77 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/32cd08c82c3a9782e49f0aedb6e9aa5133016a2e1a1a498bd5a24df1a9fb1acd/userdata/shm DeviceMajor:0 DeviceMinor:237 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/volumes/kubernetes.io~projected/kube-api-access-89prb DeviceMajor:0 DeviceMinor:263 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/adabf6ff71c6a21ac7dd07e118092057910e34a7816affdbe09eba458256dabb/userdata/shm DeviceMajor:0 DeviceMinor:318 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/a0ee8c53-bf36-4459-a2c2-380293a09e26/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:1141 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-502 DeviceMajor:0 DeviceMinor:502 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-400 DeviceMajor:0 DeviceMinor:400 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-651 DeviceMajor:0 DeviceMinor:651 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/32a3f04f-05ea-4ee3-ac77-da375c39d104/volumes/kubernetes.io~projected/kube-api-access-fxjkw DeviceMajor:0 DeviceMinor:401 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/31218dcdf0ecf9df2bd5ef8038d35cfb3eccf97f3c92277ac22d33217175df8e/userdata/shm DeviceMajor:0 DeviceMinor:802 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/103158c5-c99f-4224-bf5a-e23b1aaf9172/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:409 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-967 DeviceMajor:0 DeviceMinor:967 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-671 DeviceMajor:0 DeviceMinor:671 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/81abc17a-8a51-44e2-a5df-5ddb394a9fa6/volumes/kubernetes.io~projected/kube-api-access-cxhht DeviceMajor:0 DeviceMinor:807 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-808 DeviceMajor:0 DeviceMinor:808 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff/volumes/kubernetes.io~projected/kube-api-access-qqrn6 DeviceMajor:0 DeviceMinor:863 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/90ef7c0a-7c6f-45aa-865d-1e247110b265/volumes/kubernetes.io~projected/kube-api-access-ttqvt DeviceMajor:0 DeviceMinor:215 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/7edd93db0d8a06f729ecca24b4b7c8fc7864a838f800dec0e7d8fc63c8370d81/userdata/shm DeviceMajor:0 DeviceMinor:480 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-442 DeviceMajor:0 DeviceMinor:442 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/965f8eef-c5af-499b-b1db-cf63072781cc/volumes/kubernetes.io~projected/kube-api-access-mjzs5 DeviceMajor:0 DeviceMinor:799 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/c474b370-c291-4662-b57c-a20f77931c1b/volumes/kubernetes.io~projected/kube-api-access-xhc2q DeviceMajor:0 DeviceMinor:906 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-984 DeviceMajor:0 DeviceMinor:984 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-354 DeviceMajor:0 DeviceMinor:354 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ea474cd1-8693-4505-9d6f-863d78776d11/volumes/kubernetes.io~projected/kube-api-access-2r6wb DeviceMajor:0 DeviceMinor:74 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f061dbce14702bf613c2afa174a972bae2bb5e74063744b88de9bb9b512fc912/userdata/shm DeviceMajor:0 DeviceMinor:431 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-578 DeviceMajor:0 DeviceMinor:578 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9cf19296313ccb0a9f49159a002819b23609566806a638c368fc850d3dc27bd2/userdata/shm DeviceMajor:0 DeviceMinor:1049 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/2468d2a3-ec65-4888-a86a-3f66fa311f56/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 
DeviceMinor:226 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e677a54e6724884557ae20d247d9a84e80a29107af56ad730c6c9a95dbebf9a5/userdata/shm DeviceMajor:0 DeviceMinor:627 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1019 DeviceMajor:0 DeviceMinor:1019 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1115 DeviceMajor:0 DeviceMinor:1115 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b47ec93978468330f5b6fd9911611a54c62310997396935ab30d9d7feb5533c5/userdata/shm DeviceMajor:0 DeviceMinor:228 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d159152a376a0a7f2611797aef08a7b7f0428f856929aff15f4081f4e7f23f1e/userdata/shm DeviceMajor:0 DeviceMinor:387 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:552 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c955986a722d7c797742e1c5d2eda34143fb5f9b3ba2a0f15453a1ce4e4cb127/userdata/shm DeviceMajor:0 DeviceMinor:629 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/2728b91e-d59a-4e85-b245-0f297e9377f9/volumes/kubernetes.io~projected/kube-api-access-zmdmd DeviceMajor:0 DeviceMinor:801 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-961 DeviceMajor:0 DeviceMinor:961 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/103158c5-c99f-4224-bf5a-e23b1aaf9172/volumes/kubernetes.io~projected/kube-api-access-m5pgg DeviceMajor:0 DeviceMinor:222 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/2468d2a3-ec65-4888-a86a-3f66fa311f56/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:255 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/7a1b7b0d-6e00-485e-86e8-7bd047569328/volumes/kubernetes.io~projected/kube-api-access-fkp89 DeviceMajor:0 DeviceMinor:735 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601/volumes/kubernetes.io~projected/kube-api-access-nzgg5 DeviceMajor:0 DeviceMinor:1005 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/995e6e9f26bc876fb60a003dcae56035a03e0c1a1cc126a768cf25270214d713/userdata/shm DeviceMajor:0 DeviceMinor:1007 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:005487746ccdf8a MacAddress:4a:18:bf:6a:56:be Speed:10000 Mtu:8900} {Name:08be87d753f8ff5 MacAddress:f2:c2:72:13:3b:e6 Speed:10000 Mtu:8900} {Name:0a9eb19952ec20b MacAddress:a6:c1:f4:5f:da:8e Speed:10000 Mtu:8900} {Name:1232aad59560937 MacAddress:4e:a3:1c:98:39:24 Speed:10000 Mtu:8900} {Name:1389ca3c0a68c68 MacAddress:ea:ae:23:83:b5:b3 Speed:10000 Mtu:8900} {Name:15567f529dadb96 MacAddress:ce:ea:2a:46:34:4c Speed:10000 Mtu:8900} {Name:17b37add10475bc MacAddress:d6:8a:bc:bc:ba:95 Speed:10000 Mtu:8900} {Name:1a7085411bd9650 MacAddress:16:58:e3:45:7f:d1 Speed:10000 Mtu:8900} {Name:1b34330ab0e38ca MacAddress:c2:35:d2:cd:5f:14 Speed:10000 Mtu:8900} 
{Name:1b486915ec2d9eb MacAddress:4e:22:e7:c4:00:cb Speed:10000 Mtu:8900} {Name:296c48bf2ce9de0 MacAddress:7e:b7:82:25:06:bb Speed:10000 Mtu:8900} {Name:31218dcdf0ecf9d MacAddress:e6:84:4a:59:6d:74 Speed:10000 Mtu:8900} {Name:323b10005e4debb MacAddress:52:21:3b:55:07:28 Speed:10000 Mtu:8900} {Name:32cd08c82c3a978 MacAddress:92:60:aa:d4:d0:55 Speed:10000 Mtu:8900} {Name:33abd37edec3b66 MacAddress:ba:ac:0b:3d:68:f3 Speed:10000 Mtu:8900} {Name:361223b8a35fa2e MacAddress:7e:f5:2b:14:fa:3e Speed:10000 Mtu:8900} {Name:3656e53b736cafa MacAddress:fe:ac:ce:8e:e0:ee Speed:10000 Mtu:8900} {Name:3b7b4beff94637a MacAddress:0a:b1:f8:25:c9:32 Speed:10000 Mtu:8900} {Name:3cd41a65358471f MacAddress:66:3d:ad:d2:02:2e Speed:10000 Mtu:8900} {Name:3fb6887992993ed MacAddress:ea:d1:fb:e8:d7:25 Speed:10000 Mtu:8900} {Name:578f97e51f168b1 MacAddress:3a:2f:24:e5:55:e5 Speed:10000 Mtu:8900} {Name:5f8a5dd7ddb9e30 MacAddress:b6:0e:44:96:3d:6b Speed:10000 Mtu:8900} {Name:5ffe2f08a61a9fa MacAddress:5a:ea:72:08:f3:45 Speed:10000 Mtu:8900} {Name:78bd83c51ec0b72 MacAddress:a2:da:14:5d:9d:c8 Speed:10000 Mtu:8900} {Name:7a6ea17a030d906 MacAddress:7a:72:44:f2:83:53 Speed:10000 Mtu:8900} {Name:7ae6734dc9a6a48 MacAddress:9a:dd:06:16:fe:b8 Speed:10000 Mtu:8900} {Name:7b27a4cf8670701 MacAddress:0a:08:64:6c:d2:f6 Speed:10000 Mtu:8900} {Name:7edd93db0d8a06f MacAddress:ea:93:85:fd:ee:32 Speed:10000 Mtu:8900} {Name:7f21e214cb8d847 MacAddress:22:54:5a:c5:52:e1 Speed:10000 Mtu:8900} {Name:80f8e0a5b29cf77 MacAddress:16:f2:3d:57:e6:3a Speed:10000 Mtu:8900} {Name:846f36ee6a71e88 MacAddress:8e:b2:f7:e5:34:0b Speed:10000 Mtu:8900} {Name:8763acbe8455fad MacAddress:5a:c7:8b:32:a8:e7 Speed:10000 Mtu:8900} {Name:901d5d72687a570 MacAddress:fa:3c:aa:79:a0:25 Speed:10000 Mtu:8900} {Name:90d6dd3478d5a96 MacAddress:d2:f5:1d:a2:78:0f Speed:10000 Mtu:8900} {Name:975b4d0b44381f6 MacAddress:ca:98:6e:41:d9:40 Speed:10000 Mtu:8900} {Name:995e6e9f26bc876 MacAddress:3e:ea:7c:c0:69:e3 Speed:10000 Mtu:8900} {Name:9cf19296313ccb0 
MacAddress:3a:70:fa:c7:94:0c Speed:10000 Mtu:8900} {Name:a71f01482badfd5 MacAddress:ba:28:a4:b9:a4:5a Speed:10000 Mtu:8900} {Name:adabf6ff71c6a21 MacAddress:52:34:47:3a:35:78 Speed:10000 Mtu:8900} {Name:b1f92e19e760a85 MacAddress:92:2b:48:52:27:8c Speed:10000 Mtu:8900} {Name:b47ec9397846833 MacAddress:8e:cb:62:f2:3e:e5 Speed:10000 Mtu:8900} {Name:b5a1a52b83c9907 MacAddress:2a:05:8b:77:bf:02 Speed:10000 Mtu:8900} {Name:b5b4816a1b0e986 MacAddress:ae:5e:78:12:87:1b Speed:10000 Mtu:8900} {Name:b611cc0d60bde7b MacAddress:62:20:a1:db:bb:db Speed:10000 Mtu:8900} {Name:b835d8031dbcbc0 MacAddress:22:c3:65:27:a8:09 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:02:98:c1:3d:60:73 Speed:0 Mtu:8900} {Name:c5a4db52edd426e MacAddress:d6:ec:70:14:0f:04 Speed:10000 Mtu:8900} {Name:c6d3624a26cf17e MacAddress:82:97:a8:5b:69:17 Speed:10000 Mtu:8900} {Name:c955986a722d7c7 MacAddress:4a:40:77:f3:c5:a7 Speed:10000 Mtu:8900} {Name:cd205a040d032b1 MacAddress:8e:b8:09:62:ee:eb Speed:10000 Mtu:8900} {Name:d159152a376a0a7 MacAddress:b2:9b:46:10:c8:c3 Speed:10000 Mtu:8900} {Name:d577cf22293cc3e MacAddress:96:96:df:d1:38:3f Speed:10000 Mtu:8900} {Name:e677a54e6724884 MacAddress:96:8d:53:77:74:3e Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:b5:5c:2e Speed:-1 Mtu:9000} {Name:f061dbce14702bf MacAddress:1a:bc:3b:eb:6b:4b Speed:10000 Mtu:8900} {Name:f47ce532692381e MacAddress:02:c4:55:f8:fc:39 Speed:10000 Mtu:8900} {Name:f5a6cee35f22c78 MacAddress:0a:7b:03:db:3a:c6 Speed:10000 Mtu:8900} {Name:f7b4207e156e5bf MacAddress:ce:bd:fd:42:f3:89 Speed:10000 Mtu:8900} {Name:fcc3b92d08a13fa MacAddress:86:98:13:a2:c7:1c Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:da:1c:db:80:ac:18 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 
Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 
Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 08 03:31:26.269059 master-0 kubenswrapper[33141]: I0308 03:31:26.268355 33141 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 08 03:31:26.269059 master-0 kubenswrapper[33141]: I0308 03:31:26.268445 33141 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 08 03:31:26.269059 master-0 kubenswrapper[33141]: I0308 03:31:26.268801 33141 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 08 03:31:26.269059 master-0 kubenswrapper[33141]: I0308 03:31:26.269035 33141 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 08 03:31:26.269495 master-0 kubenswrapper[33141]: I0308 03:31:26.269077 33141 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi"
,"Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 08 03:31:26.269495 master-0 kubenswrapper[33141]: I0308 03:31:26.269322 33141 topology_manager.go:138] "Creating topology manager with none policy" Mar 08 03:31:26.269495 master-0 kubenswrapper[33141]: I0308 03:31:26.269335 33141 container_manager_linux.go:303] "Creating device plugin manager" Mar 08 03:31:26.269495 master-0 kubenswrapper[33141]: I0308 03:31:26.269346 33141 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 08 03:31:26.269495 master-0 kubenswrapper[33141]: I0308 03:31:26.269375 33141 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 08 03:31:26.269495 master-0 kubenswrapper[33141]: I0308 03:31:26.269419 33141 state_mem.go:36] "Initialized new in-memory state store" Mar 08 03:31:26.269742 master-0 kubenswrapper[33141]: I0308 03:31:26.269561 33141 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 08 03:31:26.269742 master-0 kubenswrapper[33141]: I0308 03:31:26.269633 33141 kubelet.go:418] "Attempting to sync node with API server" Mar 08 03:31:26.269742 master-0 kubenswrapper[33141]: I0308 03:31:26.269656 33141 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 08 03:31:26.269742 master-0 kubenswrapper[33141]: I0308 03:31:26.269678 33141 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 08 03:31:26.269742 master-0 kubenswrapper[33141]: I0308 03:31:26.269692 33141 kubelet.go:324] "Adding apiserver pod source" Mar 
08 03:31:26.269742 master-0 kubenswrapper[33141]: I0308 03:31:26.269714 33141 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 08 03:31:26.271791 master-0 kubenswrapper[33141]: I0308 03:31:26.271731 33141 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 08 03:31:26.272099 master-0 kubenswrapper[33141]: I0308 03:31:26.272056 33141 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Mar 08 03:31:26.272538 master-0 kubenswrapper[33141]: I0308 03:31:26.272494 33141 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 08 03:31:26.272731 master-0 kubenswrapper[33141]: I0308 03:31:26.272693 33141 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 08 03:31:26.272731 master-0 kubenswrapper[33141]: I0308 03:31:26.272731 33141 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 08 03:31:26.272844 master-0 kubenswrapper[33141]: I0308 03:31:26.272745 33141 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 08 03:31:26.272844 master-0 kubenswrapper[33141]: I0308 03:31:26.272757 33141 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 08 03:31:26.272844 master-0 kubenswrapper[33141]: I0308 03:31:26.272779 33141 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 08 03:31:26.272844 master-0 kubenswrapper[33141]: I0308 03:31:26.272794 33141 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 08 03:31:26.272844 master-0 kubenswrapper[33141]: I0308 03:31:26.272807 33141 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 08 03:31:26.272844 master-0 kubenswrapper[33141]: I0308 03:31:26.272819 33141 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Mar 08 03:31:26.272844 master-0 kubenswrapper[33141]: I0308 03:31:26.272834 33141 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 08 03:31:26.272844 master-0 kubenswrapper[33141]: I0308 03:31:26.272845 33141 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 08 03:31:26.273279 master-0 kubenswrapper[33141]: I0308 03:31:26.272862 33141 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 08 03:31:26.273279 master-0 kubenswrapper[33141]: I0308 03:31:26.272884 33141 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 08 03:31:26.273279 master-0 kubenswrapper[33141]: I0308 03:31:26.272961 33141 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 08 03:31:26.273596 master-0 kubenswrapper[33141]: I0308 03:31:26.273555 33141 server.go:1280] "Started kubelet" Mar 08 03:31:26.274990 master-0 kubenswrapper[33141]: I0308 03:31:26.274725 33141 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 08 03:31:26.275082 master-0 kubenswrapper[33141]: I0308 03:31:26.274894 33141 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 08 03:31:26.275141 master-0 kubenswrapper[33141]: I0308 03:31:26.275078 33141 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 08 03:31:26.275720 master-0 systemd[1]: Started Kubernetes Kubelet. 
Mar 08 03:31:26.286650 master-0 kubenswrapper[33141]: I0308 03:31:26.278132 33141 server.go:449] "Adding debug handlers to kubelet server" Mar 08 03:31:26.286650 master-0 kubenswrapper[33141]: I0308 03:31:26.281891 33141 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 08 03:31:26.292225 master-0 kubenswrapper[33141]: I0308 03:31:26.292179 33141 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 08 03:31:26.292400 master-0 kubenswrapper[33141]: I0308 03:31:26.292265 33141 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 08 03:31:26.293805 master-0 kubenswrapper[33141]: E0308 03:31:26.293747 33141 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:31:26.294297 master-0 kubenswrapper[33141]: I0308 03:31:26.294255 33141 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 08 03:31:26.294297 master-0 kubenswrapper[33141]: I0308 03:31:26.294284 33141 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 08 03:31:26.294622 master-0 kubenswrapper[33141]: I0308 03:31:26.294544 33141 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 08 03:31:26.295628 master-0 kubenswrapper[33141]: I0308 03:31:26.295556 33141 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-09 03:01:08 +0000 UTC, rotation deadline is 2026-03-08 21:56:05.393390512 +0000 UTC Mar 08 03:31:26.295628 master-0 kubenswrapper[33141]: I0308 03:31:26.295618 33141 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h24m39.097777475s for next certificate rotation Mar 08 03:31:26.300858 master-0 kubenswrapper[33141]: I0308 03:31:26.300479 33141 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial 
unix /run/containerd/containerd.sock: connect: no such file or directory Mar 08 03:31:26.300858 master-0 kubenswrapper[33141]: I0308 03:31:26.300514 33141 factory.go:55] Registering systemd factory Mar 08 03:31:26.300858 master-0 kubenswrapper[33141]: I0308 03:31:26.300530 33141 factory.go:221] Registration of the systemd container factory successfully Mar 08 03:31:26.305042 master-0 kubenswrapper[33141]: I0308 03:31:26.301094 33141 factory.go:153] Registering CRI-O factory Mar 08 03:31:26.305042 master-0 kubenswrapper[33141]: I0308 03:31:26.301163 33141 factory.go:221] Registration of the crio container factory successfully Mar 08 03:31:26.305042 master-0 kubenswrapper[33141]: I0308 03:31:26.301208 33141 factory.go:103] Registering Raw factory Mar 08 03:31:26.305042 master-0 kubenswrapper[33141]: I0308 03:31:26.301274 33141 manager.go:1196] Started watching for new ooms in manager Mar 08 03:31:26.305042 master-0 kubenswrapper[33141]: I0308 03:31:26.302542 33141 manager.go:319] Starting recovery of all containers Mar 08 03:31:26.305662 master-0 kubenswrapper[33141]: E0308 03:31:26.305467 33141 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 08 03:31:26.313790 master-0 kubenswrapper[33141]: I0308 03:31:26.313655 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e82d678-b5bb-4aec-9b5d-435305e8bdc2" volumeName="kubernetes.io/empty-dir/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-audit-log" seLinuxMountContext="" Mar 08 03:31:26.313790 master-0 kubenswrapper[33141]: I0308 03:31:26.313767 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42b9f2d1-da5c-46b5-b131-d206fa37d436" volumeName="kubernetes.io/projected/42b9f2d1-da5c-46b5-b131-d206fa37d436-kube-api-access-bkckt" seLinuxMountContext="" Mar 08 03:31:26.314073 master-0 kubenswrapper[33141]: I0308 03:31:26.313799 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d40fba7-84f0-46d7-9b49-dbba7aab20c5" volumeName="kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovnkube-config" seLinuxMountContext="" Mar 08 03:31:26.314073 master-0 kubenswrapper[33141]: I0308 03:31:26.313826 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2a53f3b-7e22-47eb-9f28-da3441b3662f" volumeName="kubernetes.io/configmap/d2a53f3b-7e22-47eb-9f28-da3441b3662f-service-ca" seLinuxMountContext="" Mar 08 03:31:26.314073 master-0 kubenswrapper[33141]: I0308 03:31:26.313855 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed56c17f-7e15-4776-80a6-3ef091307e89" volumeName="kubernetes.io/projected/ed56c17f-7e15-4776-80a6-3ef091307e89-kube-api-access-4kxn4" seLinuxMountContext="" Mar 08 03:31:26.314073 master-0 kubenswrapper[33141]: I0308 03:31:26.313879 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="efd90b06-2733-4086-8d70-b9aed3f7c5fa" volumeName="kubernetes.io/empty-dir/efd90b06-2733-4086-8d70-b9aed3f7c5fa-utilities" seLinuxMountContext="" Mar 08 03:31:26.314073 master-0 kubenswrapper[33141]: I0308 03:31:26.313940 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e82d678-b5bb-4aec-9b5d-435305e8bdc2" volumeName="kubernetes.io/projected/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-kube-api-access-ppbl6" seLinuxMountContext="" Mar 08 03:31:26.314073 master-0 kubenswrapper[33141]: I0308 03:31:26.313969 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e82d678-b5bb-4aec-9b5d-435305e8bdc2" volumeName="kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-client-certs" seLinuxMountContext="" Mar 08 03:31:26.314073 master-0 kubenswrapper[33141]: I0308 03:31:26.314005 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="38287d1a-b784-4ce9-9650-949d92469519" volumeName="kubernetes.io/secret/38287d1a-b784-4ce9-9650-949d92469519-cloud-credential-operator-serving-cert" seLinuxMountContext="" Mar 08 03:31:26.314073 master-0 kubenswrapper[33141]: I0308 03:31:26.314046 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6" volumeName="kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics" seLinuxMountContext="" Mar 08 03:31:26.314515 master-0 kubenswrapper[33141]: I0308 03:31:26.314105 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bfc9ae4f-eb67-4ed1-97a1-d67e839fd601" volumeName="kubernetes.io/empty-dir/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-volume-directive-shadow" seLinuxMountContext="" Mar 08 03:31:26.314515 master-0 kubenswrapper[33141]: I0308 03:31:26.314136 33141 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="0722d9c3-77b8-4770-9171-d4aeba4b0cc7" volumeName="kubernetes.io/projected/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-kube-api-access-vnvtg" seLinuxMountContext="" Mar 08 03:31:26.314515 master-0 kubenswrapper[33141]: I0308 03:31:26.314162 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a2a141d-a4c3-4b6c-a90b-d184f61a14dd" volumeName="kubernetes.io/secret/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-serving-cert" seLinuxMountContext="" Mar 08 03:31:26.314515 master-0 kubenswrapper[33141]: I0308 03:31:26.314192 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7a1b7b0d-6e00-485e-86e8-7bd047569328" volumeName="kubernetes.io/empty-dir/7a1b7b0d-6e00-485e-86e8-7bd047569328-tmpfs" seLinuxMountContext="" Mar 08 03:31:26.314515 master-0 kubenswrapper[33141]: I0308 03:31:26.314217 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d40fba7-84f0-46d7-9b49-dbba7aab20c5" volumeName="kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovnkube-script-lib" seLinuxMountContext="" Mar 08 03:31:26.314515 master-0 kubenswrapper[33141]: I0308 03:31:26.314241 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0ee8c53-bf36-4459-a2c2-380293a09e26" volumeName="kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-client-ca" seLinuxMountContext="" Mar 08 03:31:26.314515 master-0 kubenswrapper[33141]: I0308 03:31:26.314267 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae8f3a1e-689b-4107-993a-dde67f4decf2" volumeName="kubernetes.io/projected/ae8f3a1e-689b-4107-993a-dde67f4decf2-kube-api-access-ctdbq" seLinuxMountContext="" Mar 08 03:31:26.314515 master-0 kubenswrapper[33141]: I0308 03:31:26.314291 33141 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="ae8f3a1e-689b-4107-993a-dde67f4decf2" volumeName="kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Mar 08 03:31:26.314515 master-0 kubenswrapper[33141]: I0308 03:31:26.314315 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2728b91e-d59a-4e85-b245-0f297e9377f9" volumeName="kubernetes.io/projected/2728b91e-d59a-4e85-b245-0f297e9377f9-kube-api-access-zmdmd" seLinuxMountContext="" Mar 08 03:31:26.314515 master-0 kubenswrapper[33141]: I0308 03:31:26.314342 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a2a141d-a4c3-4b6c-a90b-d184f61a14dd" volumeName="kubernetes.io/configmap/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-audit-policies" seLinuxMountContext="" Mar 08 03:31:26.314515 master-0 kubenswrapper[33141]: I0308 03:31:26.314367 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82ee54a2-5967-4da7-940c-5200d7df098d" volumeName="kubernetes.io/empty-dir/82ee54a2-5967-4da7-940c-5200d7df098d-catalog-content" seLinuxMountContext="" Mar 08 03:31:26.314515 master-0 kubenswrapper[33141]: I0308 03:31:26.314391 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c65557b-9566-49f1-a049-fe492ca201b5" volumeName="kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls" seLinuxMountContext="" Mar 08 03:31:26.314515 master-0 kubenswrapper[33141]: I0308 03:31:26.314415 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="beed862c-6283-4568-aa2e-f49b31e30a3b" volumeName="kubernetes.io/configmap/beed862c-6283-4568-aa2e-f49b31e30a3b-metrics-client-ca" seLinuxMountContext="" Mar 08 03:31:26.314515 master-0 kubenswrapper[33141]: I0308 03:31:26.314441 33141 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="45212ce7-5f95-402e-93c4-83bac844f77d" volumeName="kubernetes.io/projected/45212ce7-5f95-402e-93c4-83bac844f77d-kube-api-access-knc57" seLinuxMountContext="" Mar 08 03:31:26.314515 master-0 kubenswrapper[33141]: I0308 03:31:26.314468 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a058138-8039-4841-821b-7ee5bb8648e4" volumeName="kubernetes.io/projected/5a058138-8039-4841-821b-7ee5bb8648e4-kube-api-access" seLinuxMountContext="" Mar 08 03:31:26.314515 master-0 kubenswrapper[33141]: I0308 03:31:26.314496 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="631b3a8e-43e0-4818-b6e1-bd61ac531ab6" volumeName="kubernetes.io/projected/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-kube-api-access-6q425" seLinuxMountContext="" Mar 08 03:31:26.314515 master-0 kubenswrapper[33141]: I0308 03:31:26.314525 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6" volumeName="kubernetes.io/projected/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-kube-api-access-bdzj9" seLinuxMountContext="" Mar 08 03:31:26.315511 master-0 kubenswrapper[33141]: I0308 03:31:26.314551 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2057f75-159d-4416-a234-050f0fe1afc9" volumeName="kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-encryption-config" seLinuxMountContext="" Mar 08 03:31:26.315511 master-0 kubenswrapper[33141]: I0308 03:31:26.314582 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2a506cf6-bc39-4089-9caa-4c14c4d15c11" volumeName="kubernetes.io/projected/2a506cf6-bc39-4089-9caa-4c14c4d15c11-kube-api-access-7flfl" seLinuxMountContext="" Mar 08 03:31:26.315511 master-0 kubenswrapper[33141]: I0308 
03:31:26.314607 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a2a141d-a4c3-4b6c-a90b-d184f61a14dd" volumeName="kubernetes.io/configmap/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-trusted-ca-bundle" seLinuxMountContext="" Mar 08 03:31:26.315511 master-0 kubenswrapper[33141]: I0308 03:31:26.314637 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4711e21f-da6d-47ee-8722-64663e05de10" volumeName="kubernetes.io/empty-dir/4711e21f-da6d-47ee-8722-64663e05de10-operand-assets" seLinuxMountContext="" Mar 08 03:31:26.315511 master-0 kubenswrapper[33141]: I0308 03:31:26.314665 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9b090750-b893-42fe-8def-dfb3f4253d43" volumeName="kubernetes.io/secret/9b090750-b893-42fe-8def-dfb3f4253d43-metrics-tls" seLinuxMountContext="" Mar 08 03:31:26.315511 master-0 kubenswrapper[33141]: I0308 03:31:26.314689 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6e4afd0-fbcd-49c7-9132-b54c9c28b74b" volumeName="kubernetes.io/secret/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-client" seLinuxMountContext="" Mar 08 03:31:26.315511 master-0 kubenswrapper[33141]: I0308 03:31:26.314713 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16ca7ace-9608-4686-a039-a6ba6e3ab837" volumeName="kubernetes.io/secret/16ca7ace-9608-4686-a039-a6ba6e3ab837-openshift-state-metrics-tls" seLinuxMountContext="" Mar 08 03:31:26.315511 master-0 kubenswrapper[33141]: I0308 03:31:26.314741 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b" volumeName="kubernetes.io/projected/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-kube-api-access-c72dm" seLinuxMountContext="" Mar 08 03:31:26.315511 master-0 kubenswrapper[33141]: I0308 
03:31:26.314765 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bfc9ae4f-eb67-4ed1-97a1-d67e839fd601" volumeName="kubernetes.io/configmap/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext="" Mar 08 03:31:26.315511 master-0 kubenswrapper[33141]: I0308 03:31:26.314790 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0ee8c53-bf36-4459-a2c2-380293a09e26" volumeName="kubernetes.io/projected/a0ee8c53-bf36-4459-a2c2-380293a09e26-kube-api-access-c8krg" seLinuxMountContext="" Mar 08 03:31:26.315511 master-0 kubenswrapper[33141]: I0308 03:31:26.314815 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0722d9c3-77b8-4770-9171-d4aeba4b0cc7" volumeName="kubernetes.io/configmap/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-config" seLinuxMountContext="" Mar 08 03:31:26.315511 master-0 kubenswrapper[33141]: I0308 03:31:26.314838 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d446527-f3fd-4a37-a980-7445031928d1" volumeName="kubernetes.io/projected/1d446527-f3fd-4a37-a980-7445031928d1-kube-api-access-2qvl4" seLinuxMountContext="" Mar 08 03:31:26.315511 master-0 kubenswrapper[33141]: I0308 03:31:26.314864 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1fa64f1b-9f10-488b-8f94-1600774062c4" volumeName="kubernetes.io/configmap/1fa64f1b-9f10-488b-8f94-1600774062c4-config" seLinuxMountContext="" Mar 08 03:31:26.315511 master-0 kubenswrapper[33141]: I0308 03:31:26.314890 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2a506cf6-bc39-4089-9caa-4c14c4d15c11" volumeName="kubernetes.io/configmap/2a506cf6-bc39-4089-9caa-4c14c4d15c11-config" seLinuxMountContext="" Mar 08 03:31:26.315511 master-0 
kubenswrapper[33141]: I0308 03:31:26.315128 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6176b631-3911-41cd-beb6-5bc2e924c3a7" volumeName="kubernetes.io/projected/6176b631-3911-41cd-beb6-5bc2e924c3a7-kube-api-access-snwdh" seLinuxMountContext="" Mar 08 03:31:26.315511 master-0 kubenswrapper[33141]: I0308 03:31:26.315177 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81abc17a-8a51-44e2-a5df-5ddb394a9fa6" volumeName="kubernetes.io/configmap/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-images" seLinuxMountContext="" Mar 08 03:31:26.315511 master-0 kubenswrapper[33141]: I0308 03:31:26.315279 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99923acc-a1b4-4fbc-a636-f9c145856b01" volumeName="kubernetes.io/secret/99923acc-a1b4-4fbc-a636-f9c145856b01-node-bootstrap-token" seLinuxMountContext="" Mar 08 03:31:26.315511 master-0 kubenswrapper[33141]: I0308 03:31:26.315392 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a2a141d-a4c3-4b6c-a90b-d184f61a14dd" volumeName="kubernetes.io/secret/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-encryption-config" seLinuxMountContext="" Mar 08 03:31:26.315511 master-0 kubenswrapper[33141]: I0308 03:31:26.315501 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a55bef81-2381-4036-b171-3dbc77e9c25d" volumeName="kubernetes.io/configmap/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-daemon-config" seLinuxMountContext="" Mar 08 03:31:26.315511 master-0 kubenswrapper[33141]: I0308 03:31:26.315535 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82ee54a2-5967-4da7-940c-5200d7df098d" volumeName="kubernetes.io/empty-dir/82ee54a2-5967-4da7-940c-5200d7df098d-utilities" seLinuxMountContext="" Mar 08 03:31:26.316483 master-0 
kubenswrapper[33141]: I0308 03:31:26.315566 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="197afe92-5912-4e90-a477-e3abe001bbc7" volumeName="kubernetes.io/projected/197afe92-5912-4e90-a477-e3abe001bbc7-kube-api-access-2kd6j" seLinuxMountContext="" Mar 08 03:31:26.316483 master-0 kubenswrapper[33141]: I0308 03:31:26.315592 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e82d678-b5bb-4aec-9b5d-435305e8bdc2" volumeName="kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-server-tls" seLinuxMountContext="" Mar 08 03:31:26.316483 master-0 kubenswrapper[33141]: I0308 03:31:26.315703 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2468d2a3-ec65-4888-a86a-3f66fa311f56" volumeName="kubernetes.io/configmap/2468d2a3-ec65-4888-a86a-3f66fa311f56-config" seLinuxMountContext="" Mar 08 03:31:26.316483 master-0 kubenswrapper[33141]: I0308 03:31:26.315737 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="32a3f04f-05ea-4ee3-ac77-da375c39d104" volumeName="kubernetes.io/empty-dir/32a3f04f-05ea-4ee3-ac77-da375c39d104-catalog-content" seLinuxMountContext="" Mar 08 03:31:26.316483 master-0 kubenswrapper[33141]: I0308 03:31:26.315764 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42b9f2d1-da5c-46b5-b131-d206fa37d436" volumeName="kubernetes.io/secret/42b9f2d1-da5c-46b5-b131-d206fa37d436-proxy-tls" seLinuxMountContext="" Mar 08 03:31:26.316483 master-0 kubenswrapper[33141]: I0308 03:31:26.315864 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d29f16f-e26f-4b9d-a646-230316e936a8" volumeName="kubernetes.io/empty-dir/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-tuned" seLinuxMountContext="" Mar 08 03:31:26.316483 master-0 
kubenswrapper[33141]: I0308 03:31:26.315962 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fafb070-7914-41c2-a8b2-e609a0e5bf9f" volumeName="kubernetes.io/configmap/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-mcd-auth-proxy-config" seLinuxMountContext=""
Mar 08 03:31:26.316483 master-0 kubenswrapper[33141]: I0308 03:31:26.316061 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89e15db4-c541-4d53-878d-706fa022f970" volumeName="kubernetes.io/configmap/89e15db4-c541-4d53-878d-706fa022f970-config" seLinuxMountContext=""
Mar 08 03:31:26.316483 master-0 kubenswrapper[33141]: I0308 03:31:26.316106 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5eee869-c27f-4534-bbce-d954c42b36a3" volumeName="kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-cni-binary-copy" seLinuxMountContext=""
Mar 08 03:31:26.316483 master-0 kubenswrapper[33141]: I0308 03:31:26.316133 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2057f75-159d-4416-a234-050f0fe1afc9" volumeName="kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-config" seLinuxMountContext=""
Mar 08 03:31:26.316483 master-0 kubenswrapper[33141]: I0308 03:31:26.316160 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="103158c5-c99f-4224-bf5a-e23b1aaf9172" volumeName="kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert" seLinuxMountContext=""
Mar 08 03:31:26.316483 master-0 kubenswrapper[33141]: I0308 03:31:26.316186 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="399c5025-da66-4c52-8e68-ea6c996d9cc8" volumeName="kubernetes.io/projected/399c5025-da66-4c52-8e68-ea6c996d9cc8-ca-certs" seLinuxMountContext=""
Mar 08 03:31:26.316483 master-0 kubenswrapper[33141]: I0308 03:31:26.316289 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="45212ce7-5f95-402e-93c4-83bac844f77d" volumeName="kubernetes.io/secret/45212ce7-5f95-402e-93c4-83bac844f77d-cluster-baremetal-operator-tls" seLinuxMountContext=""
Mar 08 03:31:26.316483 master-0 kubenswrapper[33141]: I0308 03:31:26.316319 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="90ef7c0a-7c6f-45aa-865d-1e247110b265" volumeName="kubernetes.io/projected/90ef7c0a-7c6f-45aa-865d-1e247110b265-kube-api-access-ttqvt" seLinuxMountContext=""
Mar 08 03:31:26.316483 master-0 kubenswrapper[33141]: I0308 03:31:26.316347 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae8f3a1e-689b-4107-993a-dde67f4decf2" volumeName="kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-tls" seLinuxMountContext=""
Mar 08 03:31:26.316483 master-0 kubenswrapper[33141]: I0308 03:31:26.316373 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bfc9ae4f-eb67-4ed1-97a1-d67e839fd601" volumeName="kubernetes.io/secret/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext=""
Mar 08 03:31:26.316483 master-0 kubenswrapper[33141]: I0308 03:31:26.316477 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2728b91e-d59a-4e85-b245-0f297e9377f9" volumeName="kubernetes.io/configmap/2728b91e-d59a-4e85-b245-0f297e9377f9-trusted-ca-bundle" seLinuxMountContext=""
Mar 08 03:31:26.317229 master-0 kubenswrapper[33141]: I0308 03:31:26.316508 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6e4afd0-fbcd-49c7-9132-b54c9c28b74b" volumeName="kubernetes.io/secret/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-serving-cert" seLinuxMountContext=""
Mar 08 03:31:26.317229 master-0 kubenswrapper[33141]: I0308 03:31:26.316536 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e82d678-b5bb-4aec-9b5d-435305e8bdc2" volumeName="kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-client-ca-bundle" seLinuxMountContext=""
Mar 08 03:31:26.317229 master-0 kubenswrapper[33141]: I0308 03:31:26.316564 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9b090750-b893-42fe-8def-dfb3f4253d43" volumeName="kubernetes.io/configmap/9b090750-b893-42fe-8def-dfb3f4253d43-config-volume" seLinuxMountContext=""
Mar 08 03:31:26.317229 master-0 kubenswrapper[33141]: I0308 03:31:26.316665 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2057f75-159d-4416-a234-050f0fe1afc9" volumeName="kubernetes.io/projected/f2057f75-159d-4416-a234-050f0fe1afc9-kube-api-access-c9vkx" seLinuxMountContext=""
Mar 08 03:31:26.317229 master-0 kubenswrapper[33141]: I0308 03:31:26.316699 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2468d2a3-ec65-4888-a86a-3f66fa311f56" volumeName="kubernetes.io/secret/2468d2a3-ec65-4888-a86a-3f66fa311f56-serving-cert" seLinuxMountContext=""
Mar 08 03:31:26.317229 master-0 kubenswrapper[33141]: I0308 03:31:26.316727 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6e4afd0-fbcd-49c7-9132-b54c9c28b74b" volumeName="kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-config" seLinuxMountContext=""
Mar 08 03:31:26.317229 master-0 kubenswrapper[33141]: I0308 03:31:26.316752 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" volumeName="kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-default-certificate" seLinuxMountContext=""
Mar 08 03:31:26.317229 master-0 kubenswrapper[33141]: I0308 03:31:26.316840 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea474cd1-8693-4505-9d6f-863d78776d11" volumeName="kubernetes.io/projected/ea474cd1-8693-4505-9d6f-863d78776d11-kube-api-access-2r6wb" seLinuxMountContext=""
Mar 08 03:31:26.317229 master-0 kubenswrapper[33141]: I0308 03:31:26.316881 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="27f5a0ab-3811-4c17-adc1-9ca48ae18ee1" volumeName="kubernetes.io/secret/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-samples-operator-tls" seLinuxMountContext=""
Mar 08 03:31:26.317229 master-0 kubenswrapper[33141]: I0308 03:31:26.316938 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="38287d1a-b784-4ce9-9650-949d92469519" volumeName="kubernetes.io/projected/38287d1a-b784-4ce9-9650-949d92469519-kube-api-access-f4gcw" seLinuxMountContext=""
Mar 08 03:31:26.317229 master-0 kubenswrapper[33141]: I0308 03:31:26.316971 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d29f16f-e26f-4b9d-a646-230316e936a8" volumeName="kubernetes.io/projected/5d29f16f-e26f-4b9d-a646-230316e936a8-kube-api-access-7p4tj" seLinuxMountContext=""
Mar 08 03:31:26.317229 master-0 kubenswrapper[33141]: I0308 03:31:26.317057 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4fd323ae-11bf-4207-bdce-4d51a9c19dc3" volumeName="kubernetes.io/secret/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-webhook-cert" seLinuxMountContext=""
Mar 08 03:31:26.317229 master-0 kubenswrapper[33141]: I0308 03:31:26.317085 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="965f8eef-c5af-499b-b1db-cf63072781cc" volumeName="kubernetes.io/projected/965f8eef-c5af-499b-b1db-cf63072781cc-kube-api-access-mjzs5" seLinuxMountContext=""
Mar 08 03:31:26.317229 master-0 kubenswrapper[33141]: I0308 03:31:26.317110 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6e4afd0-fbcd-49c7-9132-b54c9c28b74b" volumeName="kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-service-ca" seLinuxMountContext=""
Mar 08 03:31:26.317812 master-0 kubenswrapper[33141]: I0308 03:31:26.317264 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="103158c5-c99f-4224-bf5a-e23b1aaf9172" volumeName="kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls" seLinuxMountContext=""
Mar 08 03:31:26.317812 master-0 kubenswrapper[33141]: I0308 03:31:26.317405 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2ffe00fd-6834-4a5b-8b0b-b467d284f23c" volumeName="kubernetes.io/configmap/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-auth-proxy-config" seLinuxMountContext=""
Mar 08 03:31:26.317812 master-0 kubenswrapper[33141]: I0308 03:31:26.317446 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a2a141d-a4c3-4b6c-a90b-d184f61a14dd" volumeName="kubernetes.io/configmap/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-etcd-serving-ca" seLinuxMountContext=""
Mar 08 03:31:26.317812 master-0 kubenswrapper[33141]: I0308 03:31:26.317472 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7af634f0-65ac-402a-acd6-a8aad11b37ab" volumeName="kubernetes.io/configmap/7af634f0-65ac-402a-acd6-a8aad11b37ab-signing-cabundle" seLinuxMountContext=""
Mar 08 03:31:26.317812 master-0 kubenswrapper[33141]: I0308 03:31:26.317500 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fafb070-7914-41c2-a8b2-e609a0e5bf9f" volumeName="kubernetes.io/secret/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-proxy-tls" seLinuxMountContext=""
Mar 08 03:31:26.317812 master-0 kubenswrapper[33141]: I0308 03:31:26.317527 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2057f75-159d-4416-a234-050f0fe1afc9" volumeName="kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-trusted-ca-bundle" seLinuxMountContext=""
Mar 08 03:31:26.317812 master-0 kubenswrapper[33141]: I0308 03:31:26.317553 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a058138-8039-4841-821b-7ee5bb8648e4" volumeName="kubernetes.io/secret/5a058138-8039-4841-821b-7ee5bb8648e4-serving-cert" seLinuxMountContext=""
Mar 08 03:31:26.317812 master-0 kubenswrapper[33141]: I0308 03:31:26.317577 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="197afe92-5912-4e90-a477-e3abe001bbc7" volumeName="kubernetes.io/configmap/197afe92-5912-4e90-a477-e3abe001bbc7-trusted-ca" seLinuxMountContext=""
Mar 08 03:31:26.317812 master-0 kubenswrapper[33141]: I0308 03:31:26.317602 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a2a141d-a4c3-4b6c-a90b-d184f61a14dd" volumeName="kubernetes.io/projected/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-kube-api-access-h4gf5" seLinuxMountContext=""
Mar 08 03:31:26.317812 master-0 kubenswrapper[33141]: I0308 03:31:26.317624 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d82cf0db-0891-482d-856b-1675843042dd" volumeName="kubernetes.io/configmap/d82cf0db-0891-482d-856b-1675843042dd-trusted-ca" seLinuxMountContext=""
Mar 08 03:31:26.317812 master-0 kubenswrapper[33141]: I0308 03:31:26.317649 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6176b631-3911-41cd-beb6-5bc2e924c3a7" volumeName="kubernetes.io/secret/6176b631-3911-41cd-beb6-5bc2e924c3a7-cert" seLinuxMountContext=""
Mar 08 03:31:26.317812 master-0 kubenswrapper[33141]: I0308 03:31:26.317674 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99923acc-a1b4-4fbc-a636-f9c145856b01" volumeName="kubernetes.io/secret/99923acc-a1b4-4fbc-a636-f9c145856b01-certs" seLinuxMountContext=""
Mar 08 03:31:26.317812 master-0 kubenswrapper[33141]: I0308 03:31:26.317701 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9b090750-b893-42fe-8def-dfb3f4253d43" volumeName="kubernetes.io/projected/9b090750-b893-42fe-8def-dfb3f4253d43-kube-api-access-p8l6s" seLinuxMountContext=""
Mar 08 03:31:26.317812 master-0 kubenswrapper[33141]: I0308 03:31:26.317727 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef16d7ae-66aa-45d4-b1a6-1327738a46bb" volumeName="kubernetes.io/projected/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-kube-api-access-mgfrv" seLinuxMountContext=""
Mar 08 03:31:26.317812 master-0 kubenswrapper[33141]: I0308 03:31:26.317754 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="103158c5-c99f-4224-bf5a-e23b1aaf9172" volumeName="kubernetes.io/configmap/103158c5-c99f-4224-bf5a-e23b1aaf9172-trusted-ca" seLinuxMountContext=""
Mar 08 03:31:26.317812 master-0 kubenswrapper[33141]: I0308 03:31:26.317780 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5eee869-c27f-4534-bbce-d954c42b36a3" volumeName="kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-whereabouts-configmap" seLinuxMountContext=""
Mar 08 03:31:26.317812 master-0 kubenswrapper[33141]: I0308 03:31:26.317804 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1fa64f1b-9f10-488b-8f94-1600774062c4" volumeName="kubernetes.io/projected/1fa64f1b-9f10-488b-8f94-1600774062c4-kube-api-access-8k2lp" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.317829 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4711e21f-da6d-47ee-8722-64663e05de10" volumeName="kubernetes.io/projected/4711e21f-da6d-47ee-8722-64663e05de10-kube-api-access-ms6s7" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.317854 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89e15db4-c541-4d53-878d-706fa022f970" volumeName="kubernetes.io/projected/89e15db4-c541-4d53-878d-706fa022f970-kube-api-access" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.317880 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd53c98b-51cc-498a-ab37-f743a27bdcfb" volumeName="kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-client-ca" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.317941 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" volumeName="kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-stats-auth" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.317969 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea474cd1-8693-4505-9d6f-863d78776d11" volumeName="kubernetes.io/empty-dir/ea474cd1-8693-4505-9d6f-863d78776d11-utilities" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.317995 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16ca7ace-9608-4686-a039-a6ba6e3ab837" volumeName="kubernetes.io/configmap/16ca7ace-9608-4686-a039-a6ba6e3ab837-metrics-client-ca" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318018 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b" volumeName="kubernetes.io/secret/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-catalogserver-certs" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318044 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89fc77c9-b444-4828-8a35-c63ea9335245" volumeName="kubernetes.io/projected/89fc77c9-b444-4828-8a35-c63ea9335245-kube-api-access-6xrfv" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318067 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d40fba7-84f0-46d7-9b49-dbba7aab20c5" volumeName="kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-env-overrides" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318115 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b537a655-ef73-40b5-b228-95ab6cfdedf2" volumeName="kubernetes.io/configmap/b537a655-ef73-40b5-b228-95ab6cfdedf2-auth-proxy-config" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318146 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd1bcaff-7dbd-4559-92fc-5453993f643e" volumeName="kubernetes.io/projected/bd1bcaff-7dbd-4559-92fc-5453993f643e-kube-api-access-wplgs" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318172 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd53c98b-51cc-498a-ab37-f743a27bdcfb" volumeName="kubernetes.io/secret/bd53c98b-51cc-498a-ab37-f743a27bdcfb-serving-cert" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318201 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2057f75-159d-4416-a234-050f0fe1afc9" volumeName="kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-audit" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318231 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="45212ce7-5f95-402e-93c4-83bac844f77d" volumeName="kubernetes.io/secret/45212ce7-5f95-402e-93c4-83bac844f77d-cert" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318259 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4fd323ae-11bf-4207-bdce-4d51a9c19dc3" volumeName="kubernetes.io/projected/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-kube-api-access-2ct9j" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318285 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d40fba7-84f0-46d7-9b49-dbba7aab20c5" volumeName="kubernetes.io/projected/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-kube-api-access-hl7m5" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318315 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="daf9e0ac-b5a3-4a3e-aa57-31b810f634ef" volumeName="kubernetes.io/projected/daf9e0ac-b5a3-4a3e-aa57-31b810f634ef-kube-api-access-t29sr" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318339 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e82d678-b5bb-4aec-9b5d-435305e8bdc2" volumeName="kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-metrics-server-audit-profiles" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318364 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b537a655-ef73-40b5-b228-95ab6cfdedf2" volumeName="kubernetes.io/projected/b537a655-ef73-40b5-b228-95ab6cfdedf2-kube-api-access-d4t2j" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318391 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bfc9ae4f-eb67-4ed1-97a1-d67e839fd601" volumeName="kubernetes.io/configmap/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-metrics-client-ca" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318414 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7a1b7b0d-6e00-485e-86e8-7bd047569328" volumeName="kubernetes.io/secret/7a1b7b0d-6e00-485e-86e8-7bd047569328-webhook-cert" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318437 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="90ef7c0a-7c6f-45aa-865d-1e247110b265" volumeName="kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-trusted-ca-bundle" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318486 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd53c98b-51cc-498a-ab37-f743a27bdcfb" volumeName="kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-proxy-ca-bundles" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318510 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="beed862c-6283-4568-aa2e-f49b31e30a3b" volumeName="kubernetes.io/secret/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-tls" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318533 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6" volumeName="kubernetes.io/secret/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318556 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" volumeName="kubernetes.io/projected/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-kube-api-access-kxcml" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318581 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2ffe00fd-6834-4a5b-8b0b-b467d284f23c" volumeName="kubernetes.io/projected/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-kube-api-access-f42fg" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318603 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="399c5025-da66-4c52-8e68-ea6c996d9cc8" volumeName="kubernetes.io/empty-dir/399c5025-da66-4c52-8e68-ea6c996d9cc8-cache" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318627 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="45212ce7-5f95-402e-93c4-83bac844f77d" volumeName="kubernetes.io/configmap/45212ce7-5f95-402e-93c4-83bac844f77d-images" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318653 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b" volumeName="kubernetes.io/empty-dir/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-cache" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318675 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aadf7b67-db33-4392-81f5-1b93eef54545" volumeName="kubernetes.io/projected/aadf7b67-db33-4392-81f5-1b93eef54545-kube-api-access-n4vq9" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318698 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2057f75-159d-4416-a234-050f0fe1afc9" volumeName="kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-image-import-ca" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318724 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f6ee6202-11e5-4586-ae46-075da1ad7f1a" volumeName="kubernetes.io/projected/f6ee6202-11e5-4586-ae46-075da1ad7f1a-kube-api-access-njrcj" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318750 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="197afe92-5912-4e90-a477-e3abe001bbc7" volumeName="kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318774 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="32a3f04f-05ea-4ee3-ac77-da375c39d104" volumeName="kubernetes.io/empty-dir/32a3f04f-05ea-4ee3-ac77-da375c39d104-utilities" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318800 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2a53f3b-7e22-47eb-9f28-da3441b3662f" volumeName="kubernetes.io/secret/d2a53f3b-7e22-47eb-9f28-da3441b3662f-serving-cert" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318824 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2728b91e-d59a-4e85-b245-0f297e9377f9" volumeName="kubernetes.io/configmap/2728b91e-d59a-4e85-b245-0f297e9377f9-service-ca-bundle" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318847 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7af634f0-65ac-402a-acd6-a8aad11b37ab" volumeName="kubernetes.io/projected/7af634f0-65ac-402a-acd6-a8aad11b37ab-kube-api-access-sm9tk" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318873 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="beed862c-6283-4568-aa2e-f49b31e30a3b" volumeName="kubernetes.io/secret/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-kube-rbac-proxy-config" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318895 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6" volumeName="kubernetes.io/projected/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-kube-api-access-2mbg2" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318954 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2728b91e-d59a-4e85-b245-0f297e9377f9" volumeName="kubernetes.io/empty-dir/2728b91e-d59a-4e85-b245-0f297e9377f9-snapshots" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.318979 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="45212ce7-5f95-402e-93c4-83bac844f77d" volumeName="kubernetes.io/configmap/45212ce7-5f95-402e-93c4-83bac844f77d-config" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319003 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8985dac1-38cf-41d1-b7cd-c2bfaf0f6ebc" volumeName="kubernetes.io/secret/8985dac1-38cf-41d1-b7cd-c2bfaf0f6ebc-tls-certificates" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319027 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="90ef7c0a-7c6f-45aa-865d-1e247110b265" volumeName="kubernetes.io/secret/90ef7c0a-7c6f-45aa-865d-1e247110b265-serving-cert" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319053 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="99923acc-a1b4-4fbc-a636-f9c145856b01" volumeName="kubernetes.io/projected/99923acc-a1b4-4fbc-a636-f9c145856b01-kube-api-access-tfdpq" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319077 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2728b91e-d59a-4e85-b245-0f297e9377f9" volumeName="kubernetes.io/secret/2728b91e-d59a-4e85-b245-0f297e9377f9-serving-cert" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319101 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6" volumeName="kubernetes.io/configmap/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-trusted-ca" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319126 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="beed862c-6283-4568-aa2e-f49b31e30a3b" volumeName="kubernetes.io/projected/beed862c-6283-4568-aa2e-f49b31e30a3b-kube-api-access-22zrr" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319151 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81abc17a-8a51-44e2-a5df-5ddb394a9fa6" volumeName="kubernetes.io/secret/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-proxy-tls" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319175 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6e4afd0-fbcd-49c7-9132-b54c9c28b74b" volumeName="kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-ca" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319199 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff" volumeName="kubernetes.io/configmap/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-auth-proxy-config" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319227 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff" volumeName="kubernetes.io/secret/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-cloud-controller-manager-operator-tls" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319250 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0e59f2e1-7fbc-43b1-bc81-7ca5f058d774" volumeName="kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319273 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82ee54a2-5967-4da7-940c-5200d7df098d" volumeName="kubernetes.io/projected/82ee54a2-5967-4da7-940c-5200d7df098d-kube-api-access-ttwx8" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319301 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2057f75-159d-4416-a234-050f0fe1afc9" volumeName="kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-etcd-client" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319360 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f6ee6202-11e5-4586-ae46-075da1ad7f1a" volumeName="kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319385 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1fa64f1b-9f10-488b-8f94-1600774062c4" volumeName="kubernetes.io/secret/1fa64f1b-9f10-488b-8f94-1600774062c4-serving-cert" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319412 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a2a141d-a4c3-4b6c-a90b-d184f61a14dd" volumeName="kubernetes.io/secret/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-etcd-client" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319437 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae8f3a1e-689b-4107-993a-dde67f4decf2" volumeName="kubernetes.io/configmap/ae8f3a1e-689b-4107-993a-dde67f4decf2-metrics-client-ca" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319461 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff" volumeName="kubernetes.io/projected/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-kube-api-access-qqrn6" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319485 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d446527-f3fd-4a37-a980-7445031928d1" volumeName="kubernetes.io/configmap/1d446527-f3fd-4a37-a980-7445031928d1-config" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319508 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="90ef7c0a-7c6f-45aa-865d-1e247110b265" volumeName="kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-service-ca-bundle" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319530 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" volumeName="kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-metrics-certs" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319554 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81abc17a-8a51-44e2-a5df-5ddb394a9fa6" volumeName="kubernetes.io/configmap/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-auth-proxy-config" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319578 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="90ef7c0a-7c6f-45aa-865d-1e247110b265" volumeName="kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-config" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319602 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea474cd1-8693-4505-9d6f-863d78776d11" volumeName="kubernetes.io/empty-dir/ea474cd1-8693-4505-9d6f-863d78776d11-catalog-content" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319627 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="631b3a8e-43e0-4818-b6e1-bd61ac531ab6" volumeName="kubernetes.io/configmap/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-ovnkube-config" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319653 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7a1b7b0d-6e00-485e-86e8-7bd047569328" volumeName="kubernetes.io/projected/7a1b7b0d-6e00-485e-86e8-7bd047569328-kube-api-access-fkp89" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319677 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efd90b06-2733-4086-8d70-b9aed3f7c5fa" volumeName="kubernetes.io/empty-dir/efd90b06-2733-4086-8d70-b9aed3f7c5fa-catalog-content" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319702 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2057f75-159d-4416-a234-050f0fe1afc9" volumeName="kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-serving-cert" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319729 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8711b9f-3d18-4b8d-a263-2c9af9dc68a6" volumeName="kubernetes.io/projected/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-kube-api-access-7q68p" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319753 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81abc17a-8a51-44e2-a5df-5ddb394a9fa6" volumeName="kubernetes.io/projected/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-kube-api-access-cxhht" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319778 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b537a655-ef73-40b5-b228-95ab6cfdedf2" volumeName="kubernetes.io/configmap/b537a655-ef73-40b5-b228-95ab6cfdedf2-config" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319802 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d82cf0db-0891-482d-856b-1675843042dd" volumeName="kubernetes.io/projected/d82cf0db-0891-482d-856b-1675843042dd-bound-sa-token" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319826 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aadf7b67-db33-4392-81f5-1b93eef54545" volumeName="kubernetes.io/configmap/aadf7b67-db33-4392-81f5-1b93eef54545-iptables-alerter-script" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319850 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd1bcaff-7dbd-4559-92fc-5453993f643e" volumeName="kubernetes.io/empty-dir/bd1bcaff-7dbd-4559-92fc-5453993f643e-available-featuregates" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319873 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d69f101-60a8-41fd-bcda-4eb654c626a2" volumeName="kubernetes.io/projected/3d69f101-60a8-41fd-bcda-4eb654c626a2-kube-api-access-8gnng" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319897 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="631b3a8e-43e0-4818-b6e1-bd61ac531ab6" volumeName="kubernetes.io/secret/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319960 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c474b370-c291-4662-b57c-a20f77931c1b" volumeName="kubernetes.io/projected/c474b370-c291-4662-b57c-a20f77931c1b-kube-api-access-xhc2q" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.319985 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89fc77c9-b444-4828-8a35-c63ea9335245" volumeName="kubernetes.io/secret/89fc77c9-b444-4828-8a35-c63ea9335245-metrics-tls" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.320015 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd53c98b-51cc-498a-ab37-f743a27bdcfb" volumeName="kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-config" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.320040 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d2a53f3b-7e22-47eb-9f28-da3441b3662f" volumeName="kubernetes.io/projected/d2a53f3b-7e22-47eb-9f28-da3441b3662f-kube-api-access" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.320065 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e6716923-7f46-438f-9cc4-c0f071ca5b1a" volumeName="kubernetes.io/projected/e6716923-7f46-438f-9cc4-c0f071ca5b1a-kube-api-access" seLinuxMountContext=""
Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.320090 33141 reconstruct.go:130] "Volume is marked as uncertain and added into
the actual state" pod="" podName="7a1b7b0d-6e00-485e-86e8-7bd047569328" volumeName="kubernetes.io/secret/7a1b7b0d-6e00-485e-86e8-7bd047569328-apiservice-cert" seLinuxMountContext="" Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.320114 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="89e15db4-c541-4d53-878d-706fa022f970" volumeName="kubernetes.io/secret/89e15db4-c541-4d53-878d-706fa022f970-serving-cert" seLinuxMountContext="" Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.320140 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a55bef81-2381-4036-b171-3dbc77e9c25d" volumeName="kubernetes.io/configmap/a55bef81-2381-4036-b171-3dbc77e9c25d-cni-binary-copy" seLinuxMountContext="" Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.320162 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5eee869-c27f-4534-bbce-d954c42b36a3" volumeName="kubernetes.io/projected/d5eee869-c27f-4534-bbce-d954c42b36a3-kube-api-access-l2tk7" seLinuxMountContext="" Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.320187 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="daf9e0ac-b5a3-4a3e-aa57-31b810f634ef" volumeName="kubernetes.io/secret/daf9e0ac-b5a3-4a3e-aa57-31b810f634ef-webhook-certs" seLinuxMountContext="" Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.320215 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a92a557-d023-4531-b3a3-e559af0fe358" volumeName="kubernetes.io/projected/5a92a557-d023-4531-b3a3-e559af0fe358-kube-api-access-vgvcz" seLinuxMountContext="" Mar 08 03:31:26.320179 master-0 kubenswrapper[33141]: I0308 03:31:26.320294 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="16ca7ace-9608-4686-a039-a6ba6e3ab837" volumeName="kubernetes.io/secret/16ca7ace-9608-4686-a039-a6ba6e3ab837-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.320331 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d446527-f3fd-4a37-a980-7445031928d1" volumeName="kubernetes.io/secret/1d446527-f3fd-4a37-a980-7445031928d1-serving-cert" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.320355 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bfc9ae4f-eb67-4ed1-97a1-d67e839fd601" volumeName="kubernetes.io/projected/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-api-access-nzgg5" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.320380 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef16d7ae-66aa-45d4-b1a6-1327738a46bb" volumeName="kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.320403 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b" volumeName="kubernetes.io/projected/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-ca-certs" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.320426 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bfc9ae4f-eb67-4ed1-97a1-d67e839fd601" volumeName="kubernetes.io/secret/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-tls" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.320448 33141 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="9d40fba7-84f0-46d7-9b49-dbba7aab20c5" volumeName="kubernetes.io/secret/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovn-node-metrics-cert" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.320465 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0ee8c53-bf36-4459-a2c2-380293a09e26" volumeName="kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-config" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.320481 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd53c98b-51cc-498a-ab37-f743a27bdcfb" volumeName="kubernetes.io/projected/bd53c98b-51cc-498a-ab37-f743a27bdcfb-kube-api-access-hz7l8" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.320526 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed56c17f-7e15-4776-80a6-3ef091307e89" volumeName="kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.320541 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd1bcaff-7dbd-4559-92fc-5453993f643e" volumeName="kubernetes.io/secret/bd1bcaff-7dbd-4559-92fc-5453993f643e-serving-cert" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.320553 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0ee8c53-bf36-4459-a2c2-380293a09e26" volumeName="kubernetes.io/secret/a0ee8c53-bf36-4459-a2c2-380293a09e26-serving-cert" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.320568 33141 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="f8711b9f-3d18-4b8d-a263-2c9af9dc68a6" volumeName="kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.320581 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="631b3a8e-43e0-4818-b6e1-bd61ac531ab6" volumeName="kubernetes.io/configmap/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-env-overrides" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.320699 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a55bef81-2381-4036-b171-3dbc77e9c25d" volumeName="kubernetes.io/projected/a55bef81-2381-4036-b171-3dbc77e9c25d-kube-api-access-hj7h8" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.320956 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7af634f0-65ac-402a-acd6-a8aad11b37ab" volumeName="kubernetes.io/secret/7af634f0-65ac-402a-acd6-a8aad11b37ab-signing-key" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.320975 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" volumeName="kubernetes.io/configmap/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-service-ca-bundle" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.320990 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2468d2a3-ec65-4888-a86a-3f66fa311f56" volumeName="kubernetes.io/projected/2468d2a3-ec65-4888-a86a-3f66fa311f56-kube-api-access" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321008 33141 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="2a506cf6-bc39-4089-9caa-4c14c4d15c11" volumeName="kubernetes.io/secret/2a506cf6-bc39-4089-9caa-4c14c4d15c11-serving-cert" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321024 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b537a655-ef73-40b5-b228-95ab6cfdedf2" volumeName="kubernetes.io/secret/b537a655-ef73-40b5-b228-95ab6cfdedf2-machine-approver-tls" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321040 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2057f75-159d-4416-a234-050f0fe1afc9" volumeName="kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-etcd-serving-ca" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321057 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c65557b-9566-49f1-a049-fe492ca201b5" volumeName="kubernetes.io/configmap/8c65557b-9566-49f1-a049-fe492ca201b5-config" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321073 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="197afe92-5912-4e90-a477-e3abe001bbc7" volumeName="kubernetes.io/projected/197afe92-5912-4e90-a477-e3abe001bbc7-bound-sa-token" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321088 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="32a3f04f-05ea-4ee3-ac77-da375c39d104" volumeName="kubernetes.io/projected/32a3f04f-05ea-4ee3-ac77-da375c39d104-kube-api-access-fxjkw" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321114 33141 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="7fafb070-7914-41c2-a8b2-e609a0e5bf9f" volumeName="kubernetes.io/projected/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-kube-api-access-4rtt8" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321132 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c65557b-9566-49f1-a049-fe492ca201b5" volumeName="kubernetes.io/configmap/8c65557b-9566-49f1-a049-fe492ca201b5-images" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321151 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5eee869-c27f-4534-bbce-d954c42b36a3" volumeName="kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-cni-sysctl-allowlist" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321171 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d68278f6-59d5-4bbf-b969-e47635ffd4cc" volumeName="kubernetes.io/projected/d68278f6-59d5-4bbf-b969-e47635ffd4cc-kube-api-access-sstv2" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321190 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0722d9c3-77b8-4770-9171-d4aeba4b0cc7" volumeName="kubernetes.io/secret/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-serving-cert" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321210 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="103158c5-c99f-4224-bf5a-e23b1aaf9172" volumeName="kubernetes.io/projected/103158c5-c99f-4224-bf5a-e23b1aaf9172-kube-api-access-m5pgg" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321227 33141 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="27f5a0ab-3811-4c17-adc1-9ca48ae18ee1" volumeName="kubernetes.io/projected/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-kube-api-access-g28tv" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321243 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a92a557-d023-4531-b3a3-e559af0fe358" volumeName="kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321259 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d29f16f-e26f-4b9d-a646-230316e936a8" volumeName="kubernetes.io/empty-dir/5d29f16f-e26f-4b9d-a646-230316e936a8-tmp" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321275 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6e4afd0-fbcd-49c7-9132-b54c9c28b74b" volumeName="kubernetes.io/projected/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-kube-api-access-89prb" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321292 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16ca7ace-9608-4686-a039-a6ba6e3ab837" volumeName="kubernetes.io/projected/16ca7ace-9608-4686-a039-a6ba6e3ab837-kube-api-access-w8cgc" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321309 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="38287d1a-b784-4ce9-9650-949d92469519" volumeName="kubernetes.io/configmap/38287d1a-b784-4ce9-9650-949d92469519-cco-trusted-ca" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321333 33141 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="4fd323ae-11bf-4207-bdce-4d51a9c19dc3" volumeName="kubernetes.io/configmap/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-ovnkube-identity-cm" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321351 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="beed862c-6283-4568-aa2e-f49b31e30a3b" volumeName="kubernetes.io/empty-dir/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-textfile" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321370 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d82cf0db-0891-482d-856b-1675843042dd" volumeName="kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321386 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9fb588a9-6240-4513-8e4b-248eb43d3f06" volumeName="kubernetes.io/projected/9fb588a9-6240-4513-8e4b-248eb43d3f06-kube-api-access-5d8xq" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321401 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d68278f6-59d5-4bbf-b969-e47635ffd4cc" volumeName="kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321416 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d82cf0db-0891-482d-856b-1675843042dd" volumeName="kubernetes.io/projected/d82cf0db-0891-482d-856b-1675843042dd-kube-api-access-g4kt5" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321441 33141 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff" volumeName="kubernetes.io/configmap/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-images" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321458 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e82d678-b5bb-4aec-9b5d-435305e8bdc2" volumeName="kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321473 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a058138-8039-4841-821b-7ee5bb8648e4" volumeName="kubernetes.io/configmap/5a058138-8039-4841-821b-7ee5bb8648e4-config" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321490 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="399c5025-da66-4c52-8e68-ea6c996d9cc8" volumeName="kubernetes.io/projected/399c5025-da66-4c52-8e68-ea6c996d9cc8-kube-api-access-vr9bw" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321504 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4fd323ae-11bf-4207-bdce-4d51a9c19dc3" volumeName="kubernetes.io/configmap/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-env-overrides" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321524 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="965f8eef-c5af-499b-b1db-cf63072781cc" volumeName="kubernetes.io/secret/965f8eef-c5af-499b-b1db-cf63072781cc-cluster-storage-operator-serving-cert" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: 
I0308 03:31:26.321539 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed56c17f-7e15-4776-80a6-3ef091307e89" volumeName="kubernetes.io/configmap/ed56c17f-7e15-4776-80a6-3ef091307e89-telemetry-config" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321554 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f520fbf8-9403-46bc-9381-226a3a1ed1c7" volumeName="kubernetes.io/projected/f520fbf8-9403-46bc-9381-226a3a1ed1c7-kube-api-access-hrq96" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321568 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4711e21f-da6d-47ee-8722-64663e05de10" volumeName="kubernetes.io/secret/4711e21f-da6d-47ee-8722-64663e05de10-cluster-olm-operator-serving-cert" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.321583 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efd90b06-2733-4086-8d70-b9aed3f7c5fa" volumeName="kubernetes.io/projected/efd90b06-2733-4086-8d70-b9aed3f7c5fa-kube-api-access-w5qkq" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.323487 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2ffe00fd-6834-4a5b-8b0b-b467d284f23c" volumeName="kubernetes.io/secret/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-cert" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.323506 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3c336192-80ee-4d53-a4ec-710cba95fac6" volumeName="kubernetes.io/projected/3c336192-80ee-4d53-a4ec-710cba95fac6-kube-api-access-6tj8l" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 
kubenswrapper[33141]: I0308 03:31:26.323522 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42b9f2d1-da5c-46b5-b131-d206fa37d436" volumeName="kubernetes.io/configmap/42b9f2d1-da5c-46b5-b131-d206fa37d436-mcc-auth-proxy-config" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.323544 33141 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c65557b-9566-49f1-a049-fe492ca201b5" volumeName="kubernetes.io/projected/8c65557b-9566-49f1-a049-fe492ca201b5-kube-api-access-5fw25" seLinuxMountContext="" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.323559 33141 reconstruct.go:97] "Volume reconstruction finished" Mar 08 03:31:26.324031 master-0 kubenswrapper[33141]: I0308 03:31:26.323575 33141 reconciler.go:26] "Reconciler: start to sync state" Mar 08 03:31:26.346014 master-0 kubenswrapper[33141]: I0308 03:31:26.345939 33141 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 08 03:31:26.348937 master-0 kubenswrapper[33141]: I0308 03:31:26.348866 33141 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 08 03:31:26.348937 master-0 kubenswrapper[33141]: I0308 03:31:26.348918 33141 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 08 03:31:26.348937 master-0 kubenswrapper[33141]: I0308 03:31:26.348942 33141 kubelet.go:2335] "Starting kubelet main sync loop" Mar 08 03:31:26.349088 master-0 kubenswrapper[33141]: E0308 03:31:26.348983 33141 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 08 03:31:26.374243 master-0 kubenswrapper[33141]: I0308 03:31:26.374188 33141 generic.go:334] "Generic (PLEG): container finished" podID="bd53c98b-51cc-498a-ab37-f743a27bdcfb" containerID="52064f3ac14760c9cb11d88b1c7b45aa27ea0169c06575ac706e2935eceff0c6" exitCode=0 Mar 08 03:31:26.384008 master-0 kubenswrapper[33141]: I0308 03:31:26.383769 33141 generic.go:334] "Generic (PLEG): container finished" podID="d2a53f3b-7e22-47eb-9f28-da3441b3662f" containerID="50e75d2b6ff206804802c9331065b3194c6e165af0a4d329ce7b16d5dd4ec36b" exitCode=0 Mar 08 03:31:26.387310 master-0 kubenswrapper[33141]: I0308 03:31:26.387206 33141 generic.go:334] "Generic (PLEG): container finished" podID="f5e953eb-2d1d-4d67-969b-bdecc69b61f0" containerID="fa364304eb5003254684c63c5eb9681efe16b224f31c3dd661492ecd5fa5deda" exitCode=0 Mar 08 03:31:26.389367 master-0 kubenswrapper[33141]: I0308 03:31:26.389309 33141 generic.go:334] "Generic (PLEG): container finished" podID="e6716923-7f46-438f-9cc4-c0f071ca5b1a" containerID="c63ef8e2456c825e658d5f608a85868873e2b693945cba943036d87c971f2472" exitCode=0 Mar 08 03:31:26.391009 master-0 kubenswrapper[33141]: I0308 03:31:26.390967 33141 generic.go:334] "Generic (PLEG): container finished" podID="3c20b192-755d-46cd-ab12-2e823b92222e" containerID="0f14e36a52435c9a7870808befbb0f157c9e7126b2ba8d72d22dd7d795a56f5e" exitCode=0 Mar 08 03:31:26.393275 master-0 kubenswrapper[33141]: I0308 03:31:26.393234 33141 generic.go:334] 
"Generic (PLEG): container finished" podID="efd90b06-2733-4086-8d70-b9aed3f7c5fa" containerID="1e4f4d94c09667f06d80074811ef12370da17593d72be45cabbce6af91fa585e" exitCode=0 Mar 08 03:31:26.393275 master-0 kubenswrapper[33141]: I0308 03:31:26.393264 33141 generic.go:334] "Generic (PLEG): container finished" podID="efd90b06-2733-4086-8d70-b9aed3f7c5fa" containerID="4cff0cf9994171cd26e2dfc788853d1edc3f7d516e075c54ccc4de66155800df" exitCode=0 Mar 08 03:31:26.394052 master-0 kubenswrapper[33141]: E0308 03:31:26.394009 33141 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:31:26.396924 master-0 kubenswrapper[33141]: I0308 03:31:26.396855 33141 generic.go:334] "Generic (PLEG): container finished" podID="c6e4afd0-fbcd-49c7-9132-b54c9c28b74b" containerID="ba71a05bad6a20ee6c802a92e9435b17cd722af277a98de423aa90bee7e17757" exitCode=0 Mar 08 03:31:26.403828 master-0 kubenswrapper[33141]: I0308 03:31:26.403776 33141 generic.go:334] "Generic (PLEG): container finished" podID="5a058138-8039-4841-821b-7ee5bb8648e4" containerID="15751ae441f57c6481deb8b5cc3f72916e46489440f9eb8189b8afd0e24064b8" exitCode=0 Mar 08 03:31:26.413654 master-0 kubenswrapper[33141]: I0308 03:31:26.413052 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-zljww_c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6/control-plane-machine-set-operator/0.log" Mar 08 03:31:26.413654 master-0 kubenswrapper[33141]: I0308 03:31:26.413137 33141 generic.go:334] "Generic (PLEG): container finished" podID="c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6" containerID="26407c3ca61b97ca6a5ab23516c6982614940f72f59b58cd3af72397aa976645" exitCode=1 Mar 08 03:31:26.416424 master-0 kubenswrapper[33141]: I0308 03:31:26.416386 33141 generic.go:334] "Generic (PLEG): container finished" podID="3a2a141d-a4c3-4b6c-a90b-d184f61a14dd" containerID="b02be813c757aa8825e328781683d790be0707b1273d725c9eedbb7404cb32df" 
exitCode=0 Mar 08 03:31:26.428361 master-0 kubenswrapper[33141]: I0308 03:31:26.428323 33141 generic.go:334] "Generic (PLEG): container finished" podID="631b3a8e-43e0-4818-b6e1-bd61ac531ab6" containerID="3c9001c002bea8ae81641c5d4b6e3f763d09a9b2d453bd324d0fd602cf7b8d18" exitCode=0 Mar 08 03:31:26.431333 master-0 kubenswrapper[33141]: I0308 03:31:26.431270 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-qgg4b_45212ce7-5f95-402e-93c4-83bac844f77d/cluster-baremetal-operator/1.log" Mar 08 03:31:26.432027 master-0 kubenswrapper[33141]: I0308 03:31:26.431973 33141 generic.go:334] "Generic (PLEG): container finished" podID="45212ce7-5f95-402e-93c4-83bac844f77d" containerID="1f6f8381deef57a0256fc235c898d15d43f11f73c31fe5017234823e9524bbb3" exitCode=1 Mar 08 03:31:26.444221 master-0 kubenswrapper[33141]: I0308 03:31:26.443446 33141 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="544467ed5f69544193975fd6c79144f61384cc33dfea4931ad4d22fe98a678ac" exitCode=0 Mar 08 03:31:26.444221 master-0 kubenswrapper[33141]: I0308 03:31:26.443476 33141 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="30c975c18b67e45ff1d2f959009eed3f5b14395b49fcf6b6934c0641639a5191" exitCode=0 Mar 08 03:31:26.444221 master-0 kubenswrapper[33141]: I0308 03:31:26.443486 33141 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="ec3ad0a8cb7c4967a852ed5f49ded9e632a837d89e4681c433e054f6efc7dd8c" exitCode=0 Mar 08 03:31:26.445159 master-0 kubenswrapper[33141]: I0308 03:31:26.445105 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_0a8d4b89-fd81-4418-9f72-c8447fad86ad/installer/0.log" Mar 08 03:31:26.445159 master-0 kubenswrapper[33141]: I0308 03:31:26.445152 33141 generic.go:334] "Generic (PLEG): 
container finished" podID="0a8d4b89-fd81-4418-9f72-c8447fad86ad" containerID="0cb275b613648ba82dd895945a8f72c136f919a1708eb582688a065e13a9ce66" exitCode=1 Mar 08 03:31:26.452236 master-0 kubenswrapper[33141]: E0308 03:31:26.452016 33141 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 08 03:31:26.461049 master-0 kubenswrapper[33141]: I0308 03:31:26.460996 33141 generic.go:334] "Generic (PLEG): container finished" podID="a0ee8c53-bf36-4459-a2c2-380293a09e26" containerID="a37cd76e25a0f8104dadf4dc40b6fbbd6e89423031b1f10fd470d329da3c1ab7" exitCode=0 Mar 08 03:31:26.465957 master-0 kubenswrapper[33141]: I0308 03:31:26.465139 33141 generic.go:334] "Generic (PLEG): container finished" podID="7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6" containerID="0c7ee191b0d761ce93be93342e9e3606726dcf3941ed2cb569025a1100bcd65c" exitCode=0 Mar 08 03:31:26.472258 master-0 kubenswrapper[33141]: I0308 03:31:26.472232 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-5l4t7_8c65557b-9566-49f1-a049-fe492ca201b5/machine-api-operator/0.log" Mar 08 03:31:26.473220 master-0 kubenswrapper[33141]: I0308 03:31:26.473185 33141 generic.go:334] "Generic (PLEG): container finished" podID="8c65557b-9566-49f1-a049-fe492ca201b5" containerID="a06749d70fe898a009e67138a8c24210d9e9c5e2f8da6592f0e5a82371873c57" exitCode=255 Mar 08 03:31:26.497125 master-0 kubenswrapper[33141]: E0308 03:31:26.495008 33141 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:31:26.497125 master-0 kubenswrapper[33141]: I0308 03:31:26.495661 33141 generic.go:334] "Generic (PLEG): container finished" podID="627f0501-8b6a-4bc7-b610-355a0661f385" containerID="39acd779a6b4efc5eaa5408d29d32ff65cfd712c0fbed2aa3652c2244b17d9bc" exitCode=0 Mar 08 03:31:26.497947 master-0 kubenswrapper[33141]: I0308 03:31:26.497543 33141 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/5.log" Mar 08 03:31:26.497947 master-0 kubenswrapper[33141]: I0308 03:31:26.497577 33141 generic.go:334] "Generic (PLEG): container finished" podID="9fb588a9-6240-4513-8e4b-248eb43d3f06" containerID="b291f8e827490042eb3fb88b716e290e8802aa029f1abc52b08ef049c1f2620a" exitCode=1 Mar 08 03:31:26.511137 master-0 kubenswrapper[33141]: I0308 03:31:26.511087 33141 generic.go:334] "Generic (PLEG): container finished" podID="965f8eef-c5af-499b-b1db-cf63072781cc" containerID="148123547b19a17f13384ac0f521efe52ca11a8ba51861fa9546df274d15fce9" exitCode=0 Mar 08 03:31:26.519997 master-0 kubenswrapper[33141]: I0308 03:31:26.519878 33141 generic.go:334] "Generic (PLEG): container finished" podID="89e15db4-c541-4d53-878d-706fa022f970" containerID="00d9ac3c9b6193b454aa568c1a383fab452df49e6573435f6a143be4c2708486" exitCode=0 Mar 08 03:31:26.525795 master-0 kubenswrapper[33141]: I0308 03:31:26.525447 33141 generic.go:334] "Generic (PLEG): container finished" podID="81abc17a-8a51-44e2-a5df-5ddb394a9fa6" containerID="8520a5f64276e58759b21a4f5abc65748412aaf732608a2bdda90bcabbccfe1e" exitCode=0 Mar 08 03:31:26.531437 master-0 kubenswrapper[33141]: I0308 03:31:26.531402 33141 generic.go:334] "Generic (PLEG): container finished" podID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerID="3ffe89ef5d1c010872dcc8d98905a0b3c74a65a6e59320222ab4708980d7907c" exitCode=0 Mar 08 03:31:26.531437 master-0 kubenswrapper[33141]: I0308 03:31:26.531423 33141 generic.go:334] "Generic (PLEG): container finished" podID="bd1bcaff-7dbd-4559-92fc-5453993f643e" containerID="9e265e782cf76f9516c413e6f08b3615e452acde7fee6964c9dbc229a25efa6c" exitCode=0 Mar 08 03:31:26.533801 master-0 kubenswrapper[33141]: I0308 03:31:26.533760 33141 generic.go:334] "Generic (PLEG): container finished" podID="4711e21f-da6d-47ee-8722-64663e05de10" 
containerID="24027b59dda46d94a7e2a44f624ddff046a8eb2c97a011a50b8c8d2955a5f46d" exitCode=0 Mar 08 03:31:26.533801 master-0 kubenswrapper[33141]: I0308 03:31:26.533795 33141 generic.go:334] "Generic (PLEG): container finished" podID="4711e21f-da6d-47ee-8722-64663e05de10" containerID="4b47ae711314d73fcc77146d0c62592ca40a700fb32ad8d3e1174722f8823659" exitCode=0 Mar 08 03:31:26.533801 master-0 kubenswrapper[33141]: I0308 03:31:26.533804 33141 generic.go:334] "Generic (PLEG): container finished" podID="4711e21f-da6d-47ee-8722-64663e05de10" containerID="34bdcc1fe6a1c95721404567c2105c1c1fbc3c4b8fcdb91aba2994c23867fde9" exitCode=0 Mar 08 03:31:26.536297 master-0 kubenswrapper[33141]: I0308 03:31:26.536264 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler/0.log" Mar 08 03:31:26.536599 master-0 kubenswrapper[33141]: I0308 03:31:26.536567 33141 generic.go:334] "Generic (PLEG): container finished" podID="1d3d45b6ce1b3764f9927e623a71adf8" containerID="1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530" exitCode=1 Mar 08 03:31:26.536599 master-0 kubenswrapper[33141]: I0308 03:31:26.536589 33141 generic.go:334] "Generic (PLEG): container finished" podID="1d3d45b6ce1b3764f9927e623a71adf8" containerID="a5b213491f434eaf96969b81f553d91a137807d1aa05fbe10cf34450dd9f1422" exitCode=0 Mar 08 03:31:26.539850 master-0 kubenswrapper[33141]: I0308 03:31:26.538187 33141 generic.go:334] "Generic (PLEG): container finished" podID="42b9f2d1-da5c-46b5-b131-d206fa37d436" containerID="9ebffe5493b09d3a093aa85180c37071c3a0b4e8c5ef6f4c98982166c5ae432d" exitCode=0 Mar 08 03:31:26.553890 master-0 kubenswrapper[33141]: I0308 03:31:26.553833 33141 generic.go:334] "Generic (PLEG): container finished" podID="f2057f75-159d-4416-a234-050f0fe1afc9" containerID="440a29663d98c3dc23222b22803d7c93cc008176e47ed0828f4038b3d61a2b4c" exitCode=0 Mar 08 03:31:26.556166 master-0 
kubenswrapper[33141]: I0308 03:31:26.556130 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-lssws_b537a655-ef73-40b5-b228-95ab6cfdedf2/machine-approver-controller/0.log" Mar 08 03:31:26.556471 master-0 kubenswrapper[33141]: I0308 03:31:26.556432 33141 generic.go:334] "Generic (PLEG): container finished" podID="b537a655-ef73-40b5-b228-95ab6cfdedf2" containerID="b2bf1f96c69abb910723e2ce05cf88ba62c29d23e19982dd55b5fdb8f01184e9" exitCode=255 Mar 08 03:31:26.564919 master-0 kubenswrapper[33141]: I0308 03:31:26.562496 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-8qznw_f8711b9f-3d18-4b8d-a263-2c9af9dc68a6/package-server-manager/1.log" Mar 08 03:31:26.564919 master-0 kubenswrapper[33141]: I0308 03:31:26.563003 33141 generic.go:334] "Generic (PLEG): container finished" podID="f8711b9f-3d18-4b8d-a263-2c9af9dc68a6" containerID="c86422caffa4210f8d2d79226aa71c0eb21bf5b4345acfa110f682a6a9383e9a" exitCode=1 Mar 08 03:31:26.568914 master-0 kubenswrapper[33141]: I0308 03:31:26.566078 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_6a7152f2-d51f-4e15-8e0a-92278cbecd53/installer/0.log" Mar 08 03:31:26.568914 master-0 kubenswrapper[33141]: I0308 03:31:26.566113 33141 generic.go:334] "Generic (PLEG): container finished" podID="6a7152f2-d51f-4e15-8e0a-92278cbecd53" containerID="6337e7946252e7bfd9c2e54f9544cec48f69509210920bb45fdd12f2048594e7" exitCode=1 Mar 08 03:31:26.573945 master-0 kubenswrapper[33141]: I0308 03:31:26.571002 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-wxrfp_89fc77c9-b444-4828-8a35-c63ea9335245/network-operator/2.log" Mar 08 03:31:26.573945 master-0 kubenswrapper[33141]: I0308 03:31:26.571081 33141 generic.go:334] "Generic (PLEG): container finished" 
podID="89fc77c9-b444-4828-8a35-c63ea9335245" containerID="2d1f35ff4fbf411febbede650e49c2bb74f638fdc3d27726c7043dd06f0d5e3d" exitCode=255 Mar 08 03:31:26.573945 master-0 kubenswrapper[33141]: I0308 03:31:26.572554 33141 generic.go:334] "Generic (PLEG): container finished" podID="7af634f0-65ac-402a-acd6-a8aad11b37ab" containerID="4ba849afa6c1096c68700ba2a3716f297bd7a9a7ae2cf94f600da7b5f14c3033" exitCode=0 Mar 08 03:31:26.579811 master-0 kubenswrapper[33141]: I0308 03:31:26.579764 33141 generic.go:334] "Generic (PLEG): container finished" podID="90ef7c0a-7c6f-45aa-865d-1e247110b265" containerID="5c0ec338f20c1d3f7f3579ad9e29304940d141e2ae52320c796bdc9c2392d2b5" exitCode=0 Mar 08 03:31:26.587926 master-0 kubenswrapper[33141]: I0308 03:31:26.587884 33141 generic.go:334] "Generic (PLEG): container finished" podID="ed2e0194-6b50-4478-aba4-21193d2c18aa" containerID="d2e9db5795871d92c7d2a7895a4e9d84c621a83e058c0b33df388b4e6b8eebdb" exitCode=0 Mar 08 03:31:26.595271 master-0 kubenswrapper[33141]: E0308 03:31:26.595234 33141 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:31:26.604556 master-0 kubenswrapper[33141]: I0308 03:31:26.604525 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-69b6fc6b88-vjmf6_1fa64f1b-9f10-488b-8f94-1600774062c4/service-ca-operator/2.log" Mar 08 03:31:26.604674 master-0 kubenswrapper[33141]: I0308 03:31:26.604576 33141 generic.go:334] "Generic (PLEG): container finished" podID="1fa64f1b-9f10-488b-8f94-1600774062c4" containerID="c5943b694a77c0302101d6a324348e34a33f4a5d12b160d170755271c5624f54" exitCode=255 Mar 08 03:31:26.619683 master-0 kubenswrapper[33141]: I0308 03:31:26.619623 33141 generic.go:334] "Generic (PLEG): container finished" podID="e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d" containerID="1563150ee15a63a338caec1763c5794e6b7326c0a3188de3870365353993b8e5" exitCode=0 Mar 08 03:31:26.628398 master-0 
kubenswrapper[33141]: I0308 03:31:26.628340 33141 generic.go:334] "Generic (PLEG): container finished" podID="beed862c-6283-4568-aa2e-f49b31e30a3b" containerID="d1050d392274bd46ce1eee6b5d4efe54cfd2cef89c6e2cd2b5d4626e3c237593" exitCode=0 Mar 08 03:31:26.635255 master-0 kubenswrapper[33141]: I0308 03:31:26.635202 33141 generic.go:334] "Generic (PLEG): container finished" podID="32a3f04f-05ea-4ee3-ac77-da375c39d104" containerID="d95366bbb45d1486da1389f6482624ab19b4c42be8cafcec08506d4ffd00d1c1" exitCode=0 Mar 08 03:31:26.635255 master-0 kubenswrapper[33141]: I0308 03:31:26.635239 33141 generic.go:334] "Generic (PLEG): container finished" podID="32a3f04f-05ea-4ee3-ac77-da375c39d104" containerID="2a16a4af1391388c9f3a8456384c6ebc73646aae055d7d3ffb5f00616c4c0d45" exitCode=0 Mar 08 03:31:26.637724 master-0 kubenswrapper[33141]: I0308 03:31:26.637690 33141 generic.go:334] "Generic (PLEG): container finished" podID="ea474cd1-8693-4505-9d6f-863d78776d11" containerID="f6fa734f9f31ac07e6ddecdab50d459bed27799d7ebf08ef0257f97b10bcd874" exitCode=0 Mar 08 03:31:26.637863 master-0 kubenswrapper[33141]: I0308 03:31:26.637845 33141 generic.go:334] "Generic (PLEG): container finished" podID="ea474cd1-8693-4505-9d6f-863d78776d11" containerID="24d2da5eecbea2601256f35d2117582419f13128e199a2ef407b84deab351231" exitCode=0 Mar 08 03:31:26.646294 master-0 kubenswrapper[33141]: I0308 03:31:26.646250 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4bpl8_197afe92-5912-4e90-a477-e3abe001bbc7/ingress-operator/4.log" Mar 08 03:31:26.647027 master-0 kubenswrapper[33141]: I0308 03:31:26.646981 33141 generic.go:334] "Generic (PLEG): container finished" podID="197afe92-5912-4e90-a477-e3abe001bbc7" containerID="05444228b07f2531dcfc116cb1ae698869c129d21221e77d1dfea4921d9d08c4" exitCode=1 Mar 08 03:31:26.652206 master-0 kubenswrapper[33141]: E0308 03:31:26.652166 33141 kubelet.go:2359] "Skipping pod synchronization" err="container 
runtime status check may not have completed yet" Mar 08 03:31:26.657549 master-0 kubenswrapper[33141]: I0308 03:31:26.657503 33141 generic.go:334] "Generic (PLEG): container finished" podID="d5eee869-c27f-4534-bbce-d954c42b36a3" containerID="78b54e7113882d3d58fadca33d022029333723850c915170784718d6b2d76fb0" exitCode=0 Mar 08 03:31:26.657549 master-0 kubenswrapper[33141]: I0308 03:31:26.657531 33141 generic.go:334] "Generic (PLEG): container finished" podID="d5eee869-c27f-4534-bbce-d954c42b36a3" containerID="c5b6441f57692234cdd23b54b466923a1bdca368557471aa9c56fb86e4cb27c5" exitCode=0 Mar 08 03:31:26.657549 master-0 kubenswrapper[33141]: I0308 03:31:26.657539 33141 generic.go:334] "Generic (PLEG): container finished" podID="d5eee869-c27f-4534-bbce-d954c42b36a3" containerID="e69760dd587dd773054d2c68d80450fae7ea78d2c0d9ae71eb6479ccbfb89605" exitCode=0 Mar 08 03:31:26.657549 master-0 kubenswrapper[33141]: I0308 03:31:26.657548 33141 generic.go:334] "Generic (PLEG): container finished" podID="d5eee869-c27f-4534-bbce-d954c42b36a3" containerID="23e3dd34f3f6fc9e0e38ff8f0cff6316ca3075b2e57bb67cfa5a7c613c4186a1" exitCode=0 Mar 08 03:31:26.657549 master-0 kubenswrapper[33141]: I0308 03:31:26.657555 33141 generic.go:334] "Generic (PLEG): container finished" podID="d5eee869-c27f-4534-bbce-d954c42b36a3" containerID="c9ed066ab454b7a45ceb4d194fe0690fb319c3957701da913065477256cffc60" exitCode=0 Mar 08 03:31:26.658508 master-0 kubenswrapper[33141]: I0308 03:31:26.657564 33141 generic.go:334] "Generic (PLEG): container finished" podID="d5eee869-c27f-4534-bbce-d954c42b36a3" containerID="c819f7232b6c404b174ef7e43a5fe243e69bdbd6f882a1b6a72687cf4603a3a5" exitCode=0 Mar 08 03:31:26.659673 master-0 kubenswrapper[33141]: I0308 03:31:26.659607 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log" Mar 08 03:31:26.660365 master-0 kubenswrapper[33141]: I0308 
03:31:26.660137 33141 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="6641777c0515379fb5521281634350e0ba16889bd714d491e11bd483e3de969d" exitCode=1 Mar 08 03:31:26.660449 master-0 kubenswrapper[33141]: I0308 03:31:26.660348 33141 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="85f16f93cd690b5924a3bfd91c9387cfb9f04d71df5230de7d45bf3e26eb0168" exitCode=0 Mar 08 03:31:26.672979 master-0 kubenswrapper[33141]: I0308 03:31:26.672894 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_0a2e5993-e0cb-4c63-9dda-abbb60bfe42b/installer/0.log" Mar 08 03:31:26.673146 master-0 kubenswrapper[33141]: I0308 03:31:26.672993 33141 generic.go:334] "Generic (PLEG): container finished" podID="0a2e5993-e0cb-4c63-9dda-abbb60bfe42b" containerID="2569a7eccce46264a4c7e0024d1b136ccb829cb434ec57e4613d364f065d0db9" exitCode=1 Mar 08 03:31:26.680160 master-0 kubenswrapper[33141]: I0308 03:31:26.680127 33141 generic.go:334] "Generic (PLEG): container finished" podID="d82cf0db-0891-482d-856b-1675843042dd" containerID="500c7b149f4f2f095cf355a9cad0c5ca80a3d389709c1ca8a3ccda38df4eb432" exitCode=0 Mar 08 03:31:26.682494 master-0 kubenswrapper[33141]: I0308 03:31:26.682458 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-h7lpf_0722d9c3-77b8-4770-9171-d4aeba4b0cc7/openshift-controller-manager-operator/2.log" Mar 08 03:31:26.682578 master-0 kubenswrapper[33141]: I0308 03:31:26.682503 33141 generic.go:334] "Generic (PLEG): container finished" podID="0722d9c3-77b8-4770-9171-d4aeba4b0cc7" containerID="5143cbadf379a54eeca92346f6f8d879538d415d4167dd1961c3f4a4dfe1810b" exitCode=255 Mar 08 03:31:26.685325 master-0 kubenswrapper[33141]: I0308 03:31:26.684924 33141 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-rjwdp_7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b/manager/1.log" Mar 08 03:31:26.685510 master-0 kubenswrapper[33141]: I0308 03:31:26.685466 33141 generic.go:334] "Generic (PLEG): container finished" podID="7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b" containerID="d67b7c07c51ae55685846daed44be4e4bc31d9601f7c2247d08f667ff264cd33" exitCode=1 Mar 08 03:31:26.688950 master-0 kubenswrapper[33141]: I0308 03:31:26.687476 33141 generic.go:334] "Generic (PLEG): container finished" podID="2a506cf6-bc39-4089-9caa-4c14c4d15c11" containerID="62e972b8bed8e15ecb54cf31905c8e961d34ba4506e8988ac047b3329919293e" exitCode=0 Mar 08 03:31:26.693049 master-0 kubenswrapper[33141]: I0308 03:31:26.693006 33141 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="296632ab9853e033010913fee076e7b35b875fbd7f05c08351eaf2c0ae69f50d" exitCode=0 Mar 08 03:31:26.694350 master-0 kubenswrapper[33141]: I0308 03:31:26.694107 33141 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="c01067259586e4e210f6ac056b5faed267ec0e7e5fd3d0ff25d2928d118c8a91" exitCode=0 Mar 08 03:31:26.695502 master-0 kubenswrapper[33141]: E0308 03:31:26.695463 33141 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:31:26.696395 master-0 kubenswrapper[33141]: I0308 03:31:26.696355 33141 generic.go:334] "Generic (PLEG): container finished" podID="ddf7d93b-6a73-4de5-b984-cde6fba07b48" containerID="48906d4a9827177a4feca5f421bb263deddb2a2e07e0343746350be07efd8684" exitCode=0 Mar 08 03:31:26.714498 master-0 kubenswrapper[33141]: I0308 03:31:26.714449 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-ppdzb_4fd323ae-11bf-4207-bdce-4d51a9c19dc3/approver/1.log" Mar 08 03:31:26.714957 master-0 kubenswrapper[33141]: I0308 03:31:26.714851 
33141 generic.go:334] "Generic (PLEG): container finished" podID="4fd323ae-11bf-4207-bdce-4d51a9c19dc3" containerID="7ee5b861c39dc6b2389534ffbe109ec1e2487bbf38c2ab8f456f84e12449168e" exitCode=1 Mar 08 03:31:26.727025 master-0 kubenswrapper[33141]: I0308 03:31:26.726979 33141 generic.go:334] "Generic (PLEG): container finished" podID="9d40fba7-84f0-46d7-9b49-dbba7aab20c5" containerID="3c3d9e33877d35a402198be63a50621dbf8be27a97d9c8596143b4df8d2863cd" exitCode=0 Mar 08 03:31:26.736660 master-0 kubenswrapper[33141]: I0308 03:31:26.736619 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_aea52bbe-5b64-45c7-8f8c-81d027f133d0/installer/0.log" Mar 08 03:31:26.736811 master-0 kubenswrapper[33141]: I0308 03:31:26.736663 33141 generic.go:334] "Generic (PLEG): container finished" podID="aea52bbe-5b64-45c7-8f8c-81d027f133d0" containerID="15100ba27484610dbf9b61547d49ce1603f2d498f9b1453c4fbb68314939da8d" exitCode=1 Mar 08 03:31:26.742300 master-0 kubenswrapper[33141]: I0308 03:31:26.742272 33141 generic.go:334] "Generic (PLEG): container finished" podID="1d446527-f3fd-4a37-a980-7445031928d1" containerID="f7da8d6f43578f41e1847ca0341da34176f025a0cb8ed318bf310486d31635fa" exitCode=0 Mar 08 03:31:26.745175 master-0 kubenswrapper[33141]: I0308 03:31:26.745160 33141 generic.go:334] "Generic (PLEG): container finished" podID="2468d2a3-ec65-4888-a86a-3f66fa311f56" containerID="f750a9def8422866b22d39a2cd3d196c793426a1bcfc147c9836ec1f7382a781" exitCode=0 Mar 08 03:31:26.747512 master-0 kubenswrapper[33141]: I0308 03:31:26.747488 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-jd7rl_2ffe00fd-6834-4a5b-8b0b-b467d284f23c/cluster-autoscaler-operator/0.log" Mar 08 03:31:26.747828 master-0 kubenswrapper[33141]: I0308 03:31:26.747795 33141 generic.go:334] "Generic (PLEG): container finished" podID="2ffe00fd-6834-4a5b-8b0b-b467d284f23c" 
containerID="2858485e79b00900bd163b6f7b2d0d61e9d6beabaa41767ec01d73da348ed50d" exitCode=255 Mar 08 03:31:26.751572 master-0 kubenswrapper[33141]: I0308 03:31:26.751548 33141 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="22f31e2b7f0321897dacca58338ef528e1d06507bc628197034c61c7576b258f" exitCode=0 Mar 08 03:31:26.751572 master-0 kubenswrapper[33141]: I0308 03:31:26.751568 33141 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="f6002d889d471a68aa7f22937f49a82a2b4b24ab138311c194765fb16289177d" exitCode=2 Mar 08 03:31:26.753586 master-0 kubenswrapper[33141]: I0308 03:31:26.753375 33141 generic.go:334] "Generic (PLEG): container finished" podID="3d69f101-60a8-41fd-bcda-4eb654c626a2" containerID="c2ca8d040bfba75b786491a7f494a16b01e68ff5762368d65a86118d64a49cb6" exitCode=0 Mar 08 03:31:26.755507 master-0 kubenswrapper[33141]: I0308 03:31:26.755486 33141 generic.go:334] "Generic (PLEG): container finished" podID="82ee54a2-5967-4da7-940c-5200d7df098d" containerID="56b45cbe22a9ea31f9701b6616f25027fe9ee05239d29ec96e9726f45861602c" exitCode=0 Mar 08 03:31:26.755507 master-0 kubenswrapper[33141]: I0308 03:31:26.755506 33141 generic.go:334] "Generic (PLEG): container finished" podID="82ee54a2-5967-4da7-940c-5200d7df098d" containerID="9c94e7958c020b301758cb42ae87ec2c374c361307485925c4fcc17c93742009" exitCode=0 Mar 08 03:31:26.757121 master-0 kubenswrapper[33141]: I0308 03:31:26.757103 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-c4zs4_103158c5-c99f-4224-bf5a-e23b1aaf9172/cluster-node-tuning-operator/1.log" Mar 08 03:31:26.757189 master-0 kubenswrapper[33141]: I0308 03:31:26.757130 33141 generic.go:334] "Generic (PLEG): container finished" podID="103158c5-c99f-4224-bf5a-e23b1aaf9172" containerID="7828a0e0fa2706d250ad69378649c5fb641ba621ee124550bb4757af01298f2e" 
exitCode=1 Mar 08 03:31:26.759208 master-0 kubenswrapper[33141]: I0308 03:31:26.759171 33141 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="b304608a015d2315aefa117ee7abc73ca3562cc6ec205cc71c44d00892a24867" exitCode=0 Mar 08 03:31:26.760749 master-0 kubenswrapper[33141]: I0308 03:31:26.760710 33141 generic.go:334] "Generic (PLEG): container finished" podID="2728b91e-d59a-4e85-b245-0f297e9377f9" containerID="b4185e1d0f2f95c6a9df7b27b993524a8893ce06520676f0b8d760044b63fa25" exitCode=0 Mar 08 03:31:26.762196 master-0 kubenswrapper[33141]: I0308 03:31:26.762172 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-c74s2_399c5025-da66-4c52-8e68-ea6c996d9cc8/manager/1.log" Mar 08 03:31:26.762507 master-0 kubenswrapper[33141]: I0308 03:31:26.762476 33141 generic.go:334] "Generic (PLEG): container finished" podID="399c5025-da66-4c52-8e68-ea6c996d9cc8" containerID="1341190aa2856a973f485203a951081b82fd1c38dd7ccb12a11db05205beefcc" exitCode=1 Mar 08 03:31:26.810860 master-0 kubenswrapper[33141]: E0308 03:31:26.796261 33141 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:31:26.896604 master-0 kubenswrapper[33141]: E0308 03:31:26.896568 33141 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:31:26.996934 master-0 kubenswrapper[33141]: E0308 03:31:26.996896 33141 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:31:27.055371 master-0 kubenswrapper[33141]: E0308 03:31:27.055248 33141 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 08 03:31:27.097209 master-0 kubenswrapper[33141]: E0308 03:31:27.097115 33141 kubelet_node_status.go:503] "Error getting the current node 
from lister" err="node \"master-0\" not found" Mar 08 03:31:27.113105 master-0 kubenswrapper[33141]: I0308 03:31:27.113068 33141 manager.go:324] Recovery completed Mar 08 03:31:27.198140 master-0 kubenswrapper[33141]: E0308 03:31:27.198084 33141 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:31:27.232292 master-0 kubenswrapper[33141]: I0308 03:31:27.232236 33141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:31:27.235753 master-0 kubenswrapper[33141]: I0308 03:31:27.235724 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:31:27.235861 master-0 kubenswrapper[33141]: I0308 03:31:27.235850 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:31:27.235940 master-0 kubenswrapper[33141]: I0308 03:31:27.235931 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:31:27.241460 master-0 kubenswrapper[33141]: I0308 03:31:27.241407 33141 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 08 03:31:27.241460 master-0 kubenswrapper[33141]: I0308 03:31:27.241455 33141 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 08 03:31:27.241661 master-0 kubenswrapper[33141]: I0308 03:31:27.241632 33141 state_mem.go:36] "Initialized new in-memory state store" Mar 08 03:31:27.242017 master-0 kubenswrapper[33141]: I0308 03:31:27.241982 33141 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 08 03:31:27.242058 master-0 kubenswrapper[33141]: I0308 03:31:27.242014 33141 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 08 03:31:27.242058 master-0 kubenswrapper[33141]: I0308 03:31:27.242045 33141 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Mar 08 03:31:27.242058 master-0 
kubenswrapper[33141]: I0308 03:31:27.242056 33141 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Mar 08 03:31:27.242137 master-0 kubenswrapper[33141]: I0308 03:31:27.242068 33141 policy_none.go:49] "None policy: Start" Mar 08 03:31:27.245921 master-0 kubenswrapper[33141]: I0308 03:31:27.245864 33141 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 08 03:31:27.245982 master-0 kubenswrapper[33141]: I0308 03:31:27.245943 33141 state_mem.go:35] "Initializing new in-memory state store" Mar 08 03:31:27.246266 master-0 kubenswrapper[33141]: I0308 03:31:27.246235 33141 state_mem.go:75] "Updated machine memory state" Mar 08 03:31:27.246266 master-0 kubenswrapper[33141]: I0308 03:31:27.246261 33141 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Mar 08 03:31:27.299110 master-0 kubenswrapper[33141]: E0308 03:31:27.299039 33141 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 08 03:31:27.312918 master-0 kubenswrapper[33141]: I0308 03:31:27.310457 33141 manager.go:334] "Starting Device Plugin manager" Mar 08 03:31:27.312918 master-0 kubenswrapper[33141]: I0308 03:31:27.310545 33141 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 08 03:31:27.312918 master-0 kubenswrapper[33141]: I0308 03:31:27.310570 33141 server.go:79] "Starting device plugin registration server" Mar 08 03:31:27.312918 master-0 kubenswrapper[33141]: I0308 03:31:27.311030 33141 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 08 03:31:27.312918 master-0 kubenswrapper[33141]: I0308 03:31:27.311044 33141 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 08 03:31:27.313553 master-0 kubenswrapper[33141]: I0308 03:31:27.313529 33141 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 08 
03:31:27.313779 master-0 kubenswrapper[33141]: I0308 03:31:27.313769 33141 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 08 03:31:27.313882 master-0 kubenswrapper[33141]: I0308 03:31:27.313872 33141 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 08 03:31:27.328015 master-0 kubenswrapper[33141]: E0308 03:31:27.326444 33141 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 08 03:31:27.414194 master-0 kubenswrapper[33141]: I0308 03:31:27.414106 33141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:31:27.419931 master-0 kubenswrapper[33141]: I0308 03:31:27.417058 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:31:27.419931 master-0 kubenswrapper[33141]: I0308 03:31:27.417093 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:31:27.419931 master-0 kubenswrapper[33141]: I0308 03:31:27.417101 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:31:27.419931 master-0 kubenswrapper[33141]: I0308 03:31:27.417123 33141 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 08 03:31:27.770461 master-0 kubenswrapper[33141]: I0308 03:31:27.770421 33141 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="bf4fabb9c08963210bf1f0d197a394d399879939569bdcc3789dd4b487cec36f" exitCode=0 Mar 08 03:31:27.856251 master-0 kubenswrapper[33141]: I0308 03:31:27.856139 33141 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Mar 08 03:31:27.856496 master-0 kubenswrapper[33141]: I0308 03:31:27.856335 33141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:31:27.859398 master-0 kubenswrapper[33141]: I0308 03:31:27.859365 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:31:27.859521 master-0 kubenswrapper[33141]: I0308 03:31:27.859507 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:31:27.859619 master-0 kubenswrapper[33141]: I0308 03:31:27.859605 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:31:27.859850 master-0 kubenswrapper[33141]: I0308 03:31:27.859834 33141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:31:27.860025 master-0 kubenswrapper[33141]: I0308 03:31:27.859998 33141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:31:27.862742 master-0 kubenswrapper[33141]: I0308 03:31:27.862705 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:31:27.862837 master-0 kubenswrapper[33141]: I0308 03:31:27.862752 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:31:27.862837 master-0 kubenswrapper[33141]: I0308 03:31:27.862765 33141 kubelet_node_status.go:724] "Recording event message for node" 
node="master-0" event="NodeHasSufficientPID"
Mar 08 03:31:27.863135 master-0 kubenswrapper[33141]: I0308 03:31:27.863117 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 03:31:27.863247 master-0 kubenswrapper[33141]: I0308 03:31:27.863233 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 03:31:27.863338 master-0 kubenswrapper[33141]: I0308 03:31:27.863325 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 03:31:27.863516 master-0 kubenswrapper[33141]: I0308 03:31:27.863498 33141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 03:31:27.863681 master-0 kubenswrapper[33141]: I0308 03:31:27.863653 33141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 03:31:27.866206 master-0 kubenswrapper[33141]: I0308 03:31:27.866171 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 03:31:27.866284 master-0 kubenswrapper[33141]: I0308 03:31:27.866214 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 03:31:27.866284 master-0 kubenswrapper[33141]: I0308 03:31:27.866224 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 03:31:27.867274 master-0 kubenswrapper[33141]: I0308 03:31:27.867251 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 03:31:27.867274 master-0 kubenswrapper[33141]: I0308 03:31:27.867281 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 03:31:27.867398 master-0 kubenswrapper[33141]: I0308 03:31:27.867290 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 03:31:27.867398 master-0 kubenswrapper[33141]: I0308 03:31:27.867380 33141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 03:31:27.867637 master-0 kubenswrapper[33141]: I0308 03:31:27.867618 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:31:27.867731 master-0 kubenswrapper[33141]: I0308 03:31:27.867717 33141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 03:31:27.869494 master-0 kubenswrapper[33141]: I0308 03:31:27.869470 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 03:31:27.869494 master-0 kubenswrapper[33141]: I0308 03:31:27.869495 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 03:31:27.869614 master-0 kubenswrapper[33141]: I0308 03:31:27.869506 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 03:31:27.869614 master-0 kubenswrapper[33141]: I0308 03:31:27.869601 33141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 03:31:27.869779 master-0 kubenswrapper[33141]: I0308 03:31:27.869756 33141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 03:31:27.870479 master-0 kubenswrapper[33141]: I0308 03:31:27.870457 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 03:31:27.870479 master-0 kubenswrapper[33141]: I0308 03:31:27.870481 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 03:31:27.870595 master-0 kubenswrapper[33141]: I0308 03:31:27.870491 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 03:31:27.871534 master-0 kubenswrapper[33141]: I0308 03:31:27.871498 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 03:31:27.871534 master-0 kubenswrapper[33141]: I0308 03:31:27.871526 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 03:31:27.871534 master-0 kubenswrapper[33141]: I0308 03:31:27.871538 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 03:31:27.871716 master-0 kubenswrapper[33141]: I0308 03:31:27.871652 33141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 03:31:27.872128 master-0 kubenswrapper[33141]: I0308 03:31:27.872055 33141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 03:31:27.872201 master-0 kubenswrapper[33141]: I0308 03:31:27.872125 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 03:31:27.872201 master-0 kubenswrapper[33141]: I0308 03:31:27.872156 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 03:31:27.872201 master-0 kubenswrapper[33141]: I0308 03:31:27.872169 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.875426 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.875451 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.875468 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.875634 33141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f48163433a800aeba4eb45dc8cedb1f723024dbb49945d8a5d3caa82f3778dc"
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.875651 33141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a708aa69cc052f931f58c87cb7019d54064fd8232a5208d8d5f9a13a69e77e36"
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.875702 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"d6afb7859936c1ddfbc758d407202a95a5bbef900466cee55affce196b98b8b5"}
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.875764 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"d98f5fff29d6ff6e9274b1d7396d5c8c1488275b7a2421d6c1826cd6d6a98019"}
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.875779 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"ca38d7ba924ac97567c848c4de9b85cf952ac808362ef46dc74a8e038161b464"}
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.875788 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"eda1f9d06b58215a69c700807746c7a2bb59d9d2efe4a26dddc2ef461fe516fc"}
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.875799 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"bbc358fa2def0911cc6a3fbdff1eaadd0b9f4c2ad7276bfbd2fbe9219f40e336"}
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.875809 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"544467ed5f69544193975fd6c79144f61384cc33dfea4931ad4d22fe98a678ac"}
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.875822 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"30c975c18b67e45ff1d2f959009eed3f5b14395b49fcf6b6934c0641639a5191"}
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.875833 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"ec3ad0a8cb7c4967a852ed5f49ded9e632a837d89e4681c433e054f6efc7dd8c"}
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.875848 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"9cfe782c9ff029928aff445d3583f6e6a05ba9a4632c234c96ec9b0f2402bfc5"}
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.875862 33141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e69232ee32af2930950dbc1ce8dd12459189b96461d880072fd507e99455d62"
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.875897 33141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b797749641d447516f356d6b48bcc046c06d0d3a6ceeefc387a38da2d330845e"
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.875965 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"368b8ba6f29a3d4eee5529b66bef9da59c3b79e9dda846465669b30ae0b1c3ef"}
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.875977 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"f4aba59a0dac77531c80c6a14f39f9750ecdb826fbbcf0547c734560189b6319"}
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.875987 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"93d525f5634313a3b094a60485f81885ea0fad7e7ead5c0208227c604d3c848e"}
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.875996 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerDied","Data":"1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530"}
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.875894 33141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.876017 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.876061 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.876074 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.876013 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerDied","Data":"a5b213491f434eaf96969b81f553d91a137807d1aa05fbe10cf34450dd9f1422"}
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.876108 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"58f21db0fa1eb017fe823a0691c0c2ecef386aab7abe2946fa7a3c24e39e3c68"}
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.876150 33141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a3f99a1a7c1a58ad3307e4987c29356dde8b338b069ed85a0484f6cbe18d2c5"
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.876186 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"6c635212a8e9ee60477413d34dfb3c70","Type":"ContainerStarted","Data":"689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334"}
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.876197 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"6c635212a8e9ee60477413d34dfb3c70","Type":"ContainerStarted","Data":"2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843"}
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.876206 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"6c635212a8e9ee60477413d34dfb3c70","Type":"ContainerStarted","Data":"c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97"}
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.876216 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"6c635212a8e9ee60477413d34dfb3c70","Type":"ContainerStarted","Data":"792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51"}
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.876229 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"6c635212a8e9ee60477413d34dfb3c70","Type":"ContainerStarted","Data":"981e0f271702172a27daba182461095b8682ca12b72ed3f46de2b6751994f11f"}
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.876241 33141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5228b99475d9080f8618d95d08696502b61174da99371fbe9bbbd7e3bda94150"
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.876802 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"94f9825100c515930737671c9db902b97098151c7357d0a97122a599d22e13f1"}
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.876818 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"6641777c0515379fb5521281634350e0ba16889bd714d491e11bd483e3de969d"}
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.876828 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"85f16f93cd690b5924a3bfd91c9387cfb9f04d71df5230de7d45bf3e26eb0168"}
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.876838 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"0e61ec2701bfc25eb5be928b08cc38e792bd258a0029c05a51bd1e479e58f0e3"}
Mar 08 03:31:27.876841 master-0 kubenswrapper[33141]: I0308 03:31:27.876868 33141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af1629d870a431db24e184fef7d2d042da3102cfaa950212d16542cff7e837ad"
Mar 08 03:31:27.880045 master-0 kubenswrapper[33141]: I0308 03:31:27.876946 33141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32a87f978dcf5066fede63e02fc606a7202218ed7b98595c93603193fba400bb"
Mar 08 03:31:27.880045 master-0 kubenswrapper[33141]: I0308 03:31:27.876994 33141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee2ff48f65a67b3bbbb6b179a0933cc0168e98cece572d365f2988cd098c9b0b"
Mar 08 03:31:27.880045 master-0 kubenswrapper[33141]: I0308 03:31:27.877033 33141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f1c6c0636a4899d7b1fba463483019132e2775ba2d317a272e9611e9eb04fdb"
Mar 08 03:31:27.880045 master-0 kubenswrapper[33141]: I0308 03:31:27.877058 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"7e97224b271117a1493a120cc3fe337bdfe3bb4220cf72a64e99825547818942"}
Mar 08 03:31:27.880045 master-0 kubenswrapper[33141]: I0308 03:31:27.877069 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerDied","Data":"b304608a015d2315aefa117ee7abc73ca3562cc6ec205cc71c44d00892a24867"}
Mar 08 03:31:27.880045 master-0 kubenswrapper[33141]: I0308 03:31:27.877080 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"708fff129dc113f73aa37f475b4ae4bc5c5913ac215686fbff11aa81a810bb5e"}
Mar 08 03:31:27.880045 master-0 kubenswrapper[33141]: I0308 03:31:27.878574 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 08 03:31:27.880045 master-0 kubenswrapper[33141]: I0308 03:31:27.878653 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 08 03:31:27.880045 master-0 kubenswrapper[33141]: I0308 03:31:27.878666 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 08 03:31:32.279483 master-0 kubenswrapper[33141]: I0308 03:31:32.279407 33141 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 08 03:31:32.280484 master-0 kubenswrapper[33141]: I0308 03:31:32.280391 33141 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 08 03:31:32.283019 master-0 kubenswrapper[33141]: I0308 03:31:32.282726 33141 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 08 03:31:32.283573 master-0 kubenswrapper[33141]: I0308 03:31:32.283499 33141 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 08 03:31:32.288454 master-0 kubenswrapper[33141]: I0308 03:31:32.288373 33141 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Mar 08 03:31:32.388626 master-0 kubenswrapper[33141]: I0308 03:31:32.388554 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:31:32.388626 master-0 kubenswrapper[33141]: I0308 03:31:32.388611 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:31:32.388626 master-0 kubenswrapper[33141]: I0308 03:31:32.388635 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:31:32.388949 master-0 kubenswrapper[33141]: I0308 03:31:32.388656 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 08 03:31:32.388949 master-0 kubenswrapper[33141]: I0308 03:31:32.388677 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:31:32.388949 master-0 kubenswrapper[33141]: I0308 03:31:32.388695 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 03:31:32.388949 master-0 kubenswrapper[33141]: I0308 03:31:32.388713 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 08 03:31:32.388949 master-0 kubenswrapper[33141]: I0308 03:31:32.388731 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 08 03:31:32.388949 master-0 kubenswrapper[33141]: I0308 03:31:32.388750 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:31:32.388949 master-0 kubenswrapper[33141]: I0308 03:31:32.388768 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6c635212a8e9ee60477413d34dfb3c70-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"6c635212a8e9ee60477413d34dfb3c70\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:31:32.388949 master-0 kubenswrapper[33141]: I0308 03:31:32.388789 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 08 03:31:32.388949 master-0 kubenswrapper[33141]: I0308 03:31:32.388809 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 08 03:31:32.388949 master-0 kubenswrapper[33141]: I0308 03:31:32.388827 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 08 03:31:32.388949 master-0 kubenswrapper[33141]: I0308 03:31:32.388845 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 08 03:31:32.388949 master-0 kubenswrapper[33141]: I0308 03:31:32.388866 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:31:32.388949 master-0 kubenswrapper[33141]: I0308 03:31:32.388884 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:31:32.388949 master-0 kubenswrapper[33141]: I0308 03:31:32.388925 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:31:32.388949 master-0 kubenswrapper[33141]: I0308 03:31:32.388946 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6c635212a8e9ee60477413d34dfb3c70-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"6c635212a8e9ee60477413d34dfb3c70\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:31:32.388949 master-0 kubenswrapper[33141]: I0308 03:31:32.388966 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 08 03:31:32.389668 master-0 kubenswrapper[33141]: I0308 03:31:32.388985 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 03:31:32.456821 master-0 kubenswrapper[33141]: E0308 03:31:32.456578 33141 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.490120 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.490222 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6c635212a8e9ee60477413d34dfb3c70-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"6c635212a8e9ee60477413d34dfb3c70\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.490279 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.490350 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.490400 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.490452 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.490546 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.490642 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.490660 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.490687 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.490718 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.490760 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.490765 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.490798 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.490827 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.490857 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.490836 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6c635212a8e9ee60477413d34dfb3c70-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"6c635212a8e9ee60477413d34dfb3c70\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.491064 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.491163 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.491227 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.491279 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.491378 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.491430 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.491483 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.491534 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 03:31:32.492188 master-0 kubenswrapper[33141]: I0308 03:31:32.491652 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:31:32.493493 master-0 kubenswrapper[33141]: I0308 03:31:32.492438 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 08 03:31:32.493493 master-0 kubenswrapper[33141]: I0308 03:31:32.492499 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 08 03:31:32.493493 master-0 kubenswrapper[33141]: I0308 03:31:32.492524 33141 operation_generator.go:637] "MountVolume.SetUp succeeded
for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:31:32.493493 master-0 kubenswrapper[33141]: I0308 03:31:32.492551 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:31:32.493493 master-0 kubenswrapper[33141]: I0308 03:31:32.492569 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:31:32.493493 master-0 kubenswrapper[33141]: I0308 03:31:32.492595 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:31:32.501514 master-0 kubenswrapper[33141]: I0308 03:31:32.501431 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 08 03:31:32.501614 master-0 kubenswrapper[33141]: I0308 03:31:32.501550 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod 
\"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:31:32.501614 master-0 kubenswrapper[33141]: I0308 03:31:32.501604 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:31:32.501721 master-0 kubenswrapper[33141]: I0308 03:31:32.501640 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6c635212a8e9ee60477413d34dfb3c70-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"6c635212a8e9ee60477413d34dfb3c70\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:31:32.501721 master-0 kubenswrapper[33141]: I0308 03:31:32.501715 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6c635212a8e9ee60477413d34dfb3c70-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"6c635212a8e9ee60477413d34dfb3c70\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:31:32.501815 master-0 kubenswrapper[33141]: I0308 03:31:32.501782 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 08 03:31:32.501865 master-0 kubenswrapper[33141]: I0308 03:31:32.501843 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod 
\"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 08 03:31:32.501954 master-0 kubenswrapper[33141]: I0308 03:31:32.501891 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:31:32.657421 master-0 kubenswrapper[33141]: I0308 03:31:32.657289 33141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:31:32.659956 master-0 kubenswrapper[33141]: I0308 03:31:32.659924 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:31:32.660064 master-0 kubenswrapper[33141]: I0308 03:31:32.659967 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:31:32.660064 master-0 kubenswrapper[33141]: I0308 03:31:32.659978 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:31:32.664029 master-0 kubenswrapper[33141]: I0308 03:31:32.663981 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 08 03:31:32.665012 master-0 kubenswrapper[33141]: I0308 03:31:32.664982 33141 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 08 03:31:32.668045 master-0 kubenswrapper[33141]: I0308 03:31:32.668019 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:31:32.668607 master-0 kubenswrapper[33141]: I0308 03:31:32.668341 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:31:32.669158 master-0 kubenswrapper[33141]: E0308 03:31:32.669099 33141 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0" Mar 08 03:31:32.674314 master-0 kubenswrapper[33141]: I0308 03:31:32.671008 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:31:32.682831 master-0 kubenswrapper[33141]: I0308 03:31:32.682464 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 08 03:31:32.710231 master-0 kubenswrapper[33141]: W0308 03:31:32.710157 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod899242a15b2bdf3b4a04fb323647ca94.slice/crio-4f3f53866fd6d9919b43af28a5b50ad71b0989e37aecb9f4818b170bba810dab WatchSource:0}: Error finding container 4f3f53866fd6d9919b43af28a5b50ad71b0989e37aecb9f4818b170bba810dab: Status 404 returned error can't find the container with id 4f3f53866fd6d9919b43af28a5b50ad71b0989e37aecb9f4818b170bba810dab Mar 08 03:31:32.811279 master-0 kubenswrapper[33141]: I0308 03:31:32.810061 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"899242a15b2bdf3b4a04fb323647ca94","Type":"ContainerStarted","Data":"4f3f53866fd6d9919b43af28a5b50ad71b0989e37aecb9f4818b170bba810dab"} Mar 08 03:31:32.830542 master-0 kubenswrapper[33141]: E0308 03:31:32.830489 33141 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 08 03:31:32.830860 master-0 kubenswrapper[33141]: E0308 03:31:32.830826 33141 
kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:31:32.831373 master-0 kubenswrapper[33141]: E0308 03:31:32.831341 33141 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Mar 08 03:31:33.071446 master-0 kubenswrapper[33141]: I0308 03:31:33.069305 33141 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:31:33.072066 master-0 kubenswrapper[33141]: I0308 03:31:33.072026 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:31:33.072151 master-0 kubenswrapper[33141]: I0308 03:31:33.072078 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:31:33.072151 master-0 kubenswrapper[33141]: I0308 03:31:33.072098 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:31:33.072318 master-0 kubenswrapper[33141]: I0308 03:31:33.072279 33141 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 08 03:31:33.077937 master-0 kubenswrapper[33141]: E0308 03:31:33.075030 33141 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0" Mar 08 03:31:33.279062 master-0 kubenswrapper[33141]: I0308 03:31:33.279005 33141 apiserver.go:52] "Watching apiserver" Mar 08 03:31:33.314623 master-0 kubenswrapper[33141]: I0308 03:31:33.314574 33141 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 08 03:31:33.316311 master-0 kubenswrapper[33141]: I0308 03:31:33.316239 33141 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr","openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt","openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n","openshift-controller-manager/controller-manager-75cd54f7f-2bg6l","openshift-etcd/etcd-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7","openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmh2","openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw","openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg","openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv","openshift-kube-scheduler/installer-4-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59","openshift-multus/multus-admission-controller-7769569c45-lxr7s","openshift-service-ca/service-ca-84bfdbbb7f-jnpl5","openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9","openshift-ingress/router-default-79f8cd6fdd-tkxj9","openshift-kube-apiserver/installer-3-master-0","openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz","openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx","openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6","openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844","openshift-dns/dns-default-p6kjc","openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr","openshift-network-diagnostics/network-check-source-7c67b67d47-6bd2j","openshift-kube-controller-manager/installer-1-master-0","openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn","openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx","assisted-installer/assisted-installer-controller-rtvl6","openshift-cloud-controller-manager-operato
r/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc","openshift-network-operator/network-operator-7c649bf6d4-wxrfp","openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4","openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7","openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww","openshift-machine-config-operator/machine-config-server-fstmq","openshift-monitoring/metrics-server-6977dfbb45-dwjx9","openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw","openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86","openshift-dns-operator/dns-operator-589895fbb7-9mhwc","openshift-kube-apiserver/installer-1-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-marketplace/redhat-operators-4h9n9","openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh","openshift-dns/node-resolver-mps4n","openshift-kube-controller-manager/installer-2-retry-1-master-0","openshift-network-diagnostics/network-check-target-4lx8s","openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-vw4v4","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-xbrdp","openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx","openshift-network-operator/iptables-alerter-fpxrc","openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2","openshift-ingress-canary/ingress-canary-fhncs","openshift-ingress-operator/ingress-operator-677db989d6-4bpl8","openshift-insights/insights-operator-8f89dfddd-9l8dc","openshift-marketplace/community-operators-82rfr","openshift-multus/multus-jzw4f","openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp","openshift-cluste
r-node-tuning-operator/tuned-qjpkx","openshift-monitoring/node-exporter-sjs7q","openshift-oauth-apiserver/apiserver-7b545788fb-82rjl","openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch","openshift-ovn-kubernetes/ovnkube-node-jq7bv","openshift-apiserver/apiserver-5bf974f84f-hzx44","openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt","openshift-etcd/installer-1-master-0","openshift-kube-apiserver/installer-3-retry-1-master-0","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8","openshift-kube-storage-version-migrator/migrator-57ccdf9b5-rrfg6","openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl","openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b","openshift-marketplace/certified-operators-r97mb","openshift-marketplace/redhat-marketplace-k6hg9","openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll","openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq","openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-machine-config-operator/machine-config-daemon-xv682","openshift-network-node-identity/network-node-identity-ppdzb","openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws","openshift-etcd/installer-2-master-0","openshift-kube-controller-manager/installer-2-master-0","openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf","openshift-multus/multus-additional-cni-plugins-c8gc6","openshift-multus/network-metrics-daemon-2l64n"] Mar 08 03:31:33.318265 master-0 kubenswrapper[33141]: I0308 03:31:33.318231 33141 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="236d9cf9-abe3-4808-9165-06e61cadf867" Mar 08 03:31:33.319034 master-0 kubenswrapper[33141]: I0308 03:31:33.319000 33141 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-rtvl6" Mar 08 03:31:33.326936 master-0 kubenswrapper[33141]: I0308 03:31:33.324551 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 08 03:31:33.326936 master-0 kubenswrapper[33141]: I0308 03:31:33.326148 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 08 03:31:33.326936 master-0 kubenswrapper[33141]: I0308 03:31:33.326177 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 08 03:31:33.326936 master-0 kubenswrapper[33141]: I0308 03:31:33.326331 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 08 03:31:33.326936 master-0 kubenswrapper[33141]: I0308 03:31:33.326353 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 08 03:31:33.326936 master-0 kubenswrapper[33141]: I0308 03:31:33.326573 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 08 03:31:33.326936 master-0 kubenswrapper[33141]: I0308 03:31:33.326807 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 08 03:31:33.326936 master-0 kubenswrapper[33141]: I0308 03:31:33.326926 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 08 03:31:33.327410 master-0 kubenswrapper[33141]: I0308 03:31:33.327015 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 08 03:31:33.327410 master-0 kubenswrapper[33141]: I0308 
03:31:33.327310 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 08 03:31:33.327410 master-0 kubenswrapper[33141]: I0308 03:31:33.327408 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 08 03:31:33.327531 master-0 kubenswrapper[33141]: I0308 03:31:33.327524 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 08 03:31:33.330945 master-0 kubenswrapper[33141]: I0308 03:31:33.327649 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 08 03:31:33.330945 master-0 kubenswrapper[33141]: I0308 03:31:33.327756 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 08 03:31:33.330945 master-0 kubenswrapper[33141]: I0308 03:31:33.327849 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 08 03:31:33.336624 master-0 kubenswrapper[33141]: I0308 03:31:33.335853 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 08 03:31:33.336624 master-0 kubenswrapper[33141]: I0308 03:31:33.335928 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 08 03:31:33.336624 master-0 kubenswrapper[33141]: I0308 03:31:33.336145 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 08 03:31:33.336624 master-0 kubenswrapper[33141]: I0308 03:31:33.336246 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 08 
03:31:33.336624 master-0 kubenswrapper[33141]: I0308 03:31:33.336366 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 08 03:31:33.336624 master-0 kubenswrapper[33141]: I0308 03:31:33.336545 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 08 03:31:33.336959 master-0 kubenswrapper[33141]: I0308 03:31:33.336663 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 08 03:31:33.336959 master-0 kubenswrapper[33141]: I0308 03:31:33.336846 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 08 03:31:33.337075 master-0 kubenswrapper[33141]: I0308 03:31:33.337052 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 08 03:31:33.344672 master-0 kubenswrapper[33141]: I0308 03:31:33.344623 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 08 03:31:33.359156 master-0 kubenswrapper[33141]: I0308 03:31:33.359119 33141 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 08 03:31:33.360959 master-0 kubenswrapper[33141]: I0308 03:31:33.360928 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 08 03:31:33.361102 master-0 kubenswrapper[33141]: I0308 03:31:33.361080 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 08 03:31:33.361245 master-0 kubenswrapper[33141]: I0308 03:31:33.361220 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 08 03:31:33.361473 master-0 kubenswrapper[33141]: I0308 03:31:33.361280 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 08 03:31:33.361575 master-0 kubenswrapper[33141]: I0308 03:31:33.361557 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 08 03:31:33.361746 master-0 kubenswrapper[33141]: I0308 03:31:33.361718 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 08 03:31:33.362060 master-0 kubenswrapper[33141]: I0308 03:31:33.361832 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 08 03:31:33.362060 master-0 kubenswrapper[33141]: I0308 03:31:33.361969 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 08 03:31:33.362170 master-0 kubenswrapper[33141]: I0308 03:31:33.362134 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 08 03:31:33.362291 master-0 kubenswrapper[33141]: I0308 
03:31:33.362275 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 08 03:31:33.362450 master-0 kubenswrapper[33141]: I0308 03:31:33.362427 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 08 03:31:33.362529 master-0 kubenswrapper[33141]: I0308 03:31:33.362513 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 08 03:31:33.368195 master-0 kubenswrapper[33141]: I0308 03:31:33.365829 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 08 03:31:33.368195 master-0 kubenswrapper[33141]: I0308 03:31:33.366380 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 08 03:31:33.368195 master-0 kubenswrapper[33141]: I0308 03:31:33.366505 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 08 03:31:33.372059 master-0 kubenswrapper[33141]: I0308 03:31:33.371248 33141 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 08 03:31:33.372353 master-0 kubenswrapper[33141]: I0308 03:31:33.372325 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 08 03:31:33.372721 master-0 kubenswrapper[33141]: I0308 03:31:33.372684 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 08 03:31:33.372796 master-0 kubenswrapper[33141]: I0308 03:31:33.372751 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 08 03:31:33.372796 master-0 kubenswrapper[33141]: I0308 03:31:33.372704 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 08 03:31:33.372984 master-0 kubenswrapper[33141]: I0308 03:31:33.372882 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 08 03:31:33.373050 master-0 kubenswrapper[33141]: I0308 03:31:33.373027 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 08 03:31:33.373132 master-0 kubenswrapper[33141]: I0308 03:31:33.373116 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 08 03:31:33.373213 master-0 kubenswrapper[33141]: I0308 03:31:33.373197 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.374051 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.374088 33141 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.374163 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.374244 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.374276 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.374368 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.374433 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.374449 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.375529 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.375607 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.375683 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.375778 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.375848 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.376619 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.376951 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.377730 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.378103 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.379010 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.379155 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.379386 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.380469 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.380596 33141 Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-fm6df"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.380821 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.380957 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.381059 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.381183 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.382252 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.382365 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.382539 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.382630 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.382777 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 08 03:31:33.388502 master-0 kubenswrapper[33141]: I0308 03:31:33.382890 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 08 03:31:33.390631 master-0 kubenswrapper[33141]: I0308 03:31:33.390576 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 08 03:31:33.390631 master-0 kubenswrapper[33141]: I0308 03:31:33.382944 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 08 03:31:33.390631 master-0 kubenswrapper[33141]: I0308 03:31:33.382992 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 08 03:31:33.390631 master-0 kubenswrapper[33141]: I0308 03:31:33.383011 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 08 03:31:33.390631 master-0 kubenswrapper[33141]: I0308 03:31:33.383037 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 08 03:31:33.390631 master-0 kubenswrapper[33141]: I0308 03:31:33.383435 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 08 03:31:33.390631 master-0 kubenswrapper[33141]: I0308 03:31:33.384417 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 08 03:31:33.390631 master-0 kubenswrapper[33141]: I0308 03:31:33.384498 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 08 03:31:33.392457 master-0 kubenswrapper[33141]: I0308 03:31:33.383316 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 08 03:31:33.392457 master-0 kubenswrapper[33141]: I0308 03:31:33.391549 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 08 03:31:33.392457 master-0 kubenswrapper[33141]: I0308 03:31:33.383791 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 08 03:31:33.395877 master-0 kubenswrapper[33141]: I0308 03:31:33.383209 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 08 03:31:33.395877 master-0 kubenswrapper[33141]: I0308 03:31:33.383260 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 08 03:31:33.395877 master-0 kubenswrapper[33141]: I0308 03:31:33.383132 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.401062 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.401745 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.401989 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.402134 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.402261 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.402792 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.403954 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.404747 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.405036 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.405296 33141 Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.405344 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.405357 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.405648 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.410886 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.414797 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/45212ce7-5f95-402e-93c4-83bac844f77d-images\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.415007 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhc2q\" (UniqueName: \"kubernetes.io/projected/c474b370-c291-4662-b57c-a20f77931c1b-kube-api-access-xhc2q\") pod \"network-check-source-7c67b67d47-6bd2j\" (UID: \"c474b370-c291-4662-b57c-a20f77931c1b\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-6bd2j"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.415233 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4vq9\" (UniqueName: \"kubernetes.io/projected/aadf7b67-db33-4392-81f5-1b93eef54545-kube-api-access-n4vq9\") pod \"iptables-alerter-fpxrc\" (UID: \"aadf7b67-db33-4392-81f5-1b93eef54545\") " pod="openshift-network-operator/iptables-alerter-fpxrc"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.415581 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a3f04f-05ea-4ee3-ac77-da375c39d104-catalog-content\") pod \"redhat-marketplace-k6hg9\" (UID: \"32a3f04f-05ea-4ee3-ac77-da375c39d104\") " pod="openshift-marketplace/redhat-marketplace-k6hg9"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.416087 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32a3f04f-05ea-4ee3-ac77-da375c39d104-catalog-content\") pod \"redhat-marketplace-k6hg9\" (UID: \"32a3f04f-05ea-4ee3-ac77-da375c39d104\") " pod="openshift-marketplace/redhat-marketplace-k6hg9"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.416350 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa64f1b-9f10-488b-8f94-1600774062c4-serving-cert\") pod \"service-ca-operator-69b6fc6b88-vjmf6\" (UID: \"1fa64f1b-9f10-488b-8f94-1600774062c4\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.416472 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45212ce7-5f95-402e-93c4-83bac844f77d-config\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.416491 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxjkw\" (UniqueName: \"kubernetes.io/projected/32a3f04f-05ea-4ee3-ac77-da375c39d104-kube-api-access-fxjkw\") pod \"redhat-marketplace-k6hg9\" (UID: \"32a3f04f-05ea-4ee3-ac77-da375c39d104\") " pod="openshift-marketplace/redhat-marketplace-k6hg9"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.416629 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d82cf0db-0891-482d-856b-1675843042dd-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.416652 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45212ce7-5f95-402e-93c4-83bac844f77d-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b"
Mar 08 03:31:33.417289 master-0 kubenswrapper[33141]: I0308 03:31:33.417054 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa64f1b-9f10-488b-8f94-1600774062c4-serving-cert\") pod \"service-ca-operator-69b6fc6b88-vjmf6\" (UID: \"1fa64f1b-9f10-488b-8f94-1600774062c4\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6"
Mar 08 03:31:33.418206 master-0 kubenswrapper[33141]: I0308 03:31:33.417475 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq"
Mar 08 03:31:33.418206 master-0 kubenswrapper[33141]: I0308 03:31:33.417820 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d82cf0db-0891-482d-856b-1675843042dd-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq"
Mar 08 03:31:33.418206 master-0 kubenswrapper[33141]: I0308 03:31:33.418050 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4kt5\" (UniqueName: \"kubernetes.io/projected/d82cf0db-0891-482d-856b-1675843042dd-kube-api-access-g4kt5\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq"
Mar 08 03:31:33.418334 master-0 kubenswrapper[33141]: I0308 03:31:33.418219 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knc57\" (UniqueName: \"kubernetes.io/projected/45212ce7-5f95-402e-93c4-83bac844f77d-kube-api-access-knc57\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b"
Mar 08 03:31:33.418399 master-0 kubenswrapper[33141]: I0308 03:31:33.418377 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aadf7b67-db33-4392-81f5-1b93eef54545-host-slash\") pod \"iptables-alerter-fpxrc\" (UID: \"aadf7b67-db33-4392-81f5-1b93eef54545\") " pod="openshift-network-operator/iptables-alerter-fpxrc"
Mar 08 03:31:33.426040 master-0 kubenswrapper[33141]: I0308 03:31:33.418926 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/8985dac1-38cf-41d1-b7cd-c2bfaf0f6ebc-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-dfmh2\" (UID: \"8985dac1-38cf-41d1-b7cd-c2bfaf0f6ebc\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmh2"
Mar 08 03:31:33.426040 master-0 kubenswrapper[33141]: I0308 03:31:33.419728 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mbg2\" (UniqueName: \"kubernetes.io/projected/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-kube-api-access-2mbg2\") pod \"control-plane-machine-set-operator-6686554ddc-zljww\" (UID: \"c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww"
Mar 08 03:31:33.426040 master-0 kubenswrapper[33141]: I0308 03:31:33.423373 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa64f1b-9f10-488b-8f94-1600774062c4-config\") pod \"service-ca-operator-69b6fc6b88-vjmf6\" (UID: \"1fa64f1b-9f10-488b-8f94-1600774062c4\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6"
Mar 08 03:31:33.426040 master-0 kubenswrapper[33141]: I0308 03:31:33.425584 33141 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 08 03:31:33.428979 master-0 kubenswrapper[33141]: I0308 03:31:33.428627 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 08 03:31:33.428979 master-0 kubenswrapper[33141]: I0308 03:31:33.419973 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa64f1b-9f10-488b-8f94-1600774062c4-config\") pod \"service-ca-operator-69b6fc6b88-vjmf6\" (UID: \"1fa64f1b-9f10-488b-8f94-1600774062c4\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6"
Mar 08 03:31:33.429115 master-0 kubenswrapper[33141]: I0308 03:31:33.429043 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/45212ce7-5f95-402e-93c4-83bac844f77d-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b"
Mar 08 03:31:33.429115 master-0 kubenswrapper[33141]: I0308 03:31:33.429084 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-zljww\" (UID: \"c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww"
Mar 08 03:31:33.429206 master-0 kubenswrapper[33141]: I0308 03:31:33.429115 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k2lp\" (UniqueName: \"kubernetes.io/projected/1fa64f1b-9f10-488b-8f94-1600774062c4-kube-api-access-8k2lp\") pod \"service-ca-operator-69b6fc6b88-vjmf6\" (UID: \"1fa64f1b-9f10-488b-8f94-1600774062c4\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6"
Mar 08 03:31:33.429206 master-0 kubenswrapper[33141]: I0308 03:31:33.429176 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/aadf7b67-db33-4392-81f5-1b93eef54545-iptables-alerter-script\") pod \"iptables-alerter-fpxrc\" (UID: \"aadf7b67-db33-4392-81f5-1b93eef54545\") " pod="openshift-network-operator/iptables-alerter-fpxrc"
Mar 08 03:31:33.429206 master-0 kubenswrapper[33141]: I0308 03:31:33.429201 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a3f04f-05ea-4ee3-ac77-da375c39d104-utilities\") pod \"redhat-marketplace-k6hg9\" (UID: \"32a3f04f-05ea-4ee3-ac77-da375c39d104\") " pod="openshift-marketplace/redhat-marketplace-k6hg9"
Mar 08 03:31:33.429287 master-0 kubenswrapper[33141]: I0308 03:31:33.429234 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/89fc77c9-b444-4828-8a35-c63ea9335245-host-etc-kube\") pod \"network-operator-7c649bf6d4-wxrfp\" (UID: \"89fc77c9-b444-4828-8a35-c63ea9335245\") " pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp"
Mar 08 03:31:33.429287 master-0 kubenswrapper[33141]: I0308 03:31:33.429265 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/89fc77c9-b444-4828-8a35-c63ea9335245-metrics-tls\") pod \"network-operator-7c649bf6d4-wxrfp\" (UID: \"89fc77c9-b444-4828-8a35-c63ea9335245\") " pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp"
Mar 08 03:31:33.429384 master-0 kubenswrapper[33141]: I0308 03:31:33.429292 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xrfv\" (UniqueName: \"kubernetes.io/projected/89fc77c9-b444-4828-8a35-c63ea9335245-kube-api-access-6xrfv\") pod \"network-operator-7c649bf6d4-wxrfp\" (UID: \"89fc77c9-b444-4828-8a35-c63ea9335245\") " pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp"
Mar 08 03:31:33.433246 master-0 kubenswrapper[33141]: I0308 03:31:33.429847 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32a3f04f-05ea-4ee3-ac77-da375c39d104-utilities\") pod \"redhat-marketplace-k6hg9\" (UID: \"32a3f04f-05ea-4ee3-ac77-da375c39d104\") " pod="openshift-marketplace/redhat-marketplace-k6hg9"
Mar 08 03:31:33.433246 master-0 kubenswrapper[33141]: I0308 03:31:33.430235 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/89fc77c9-b444-4828-8a35-c63ea9335245-metrics-tls\") pod \"network-operator-7c649bf6d4-wxrfp\" (UID: \"89fc77c9-b444-4828-8a35-c63ea9335245\") " pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp"
Mar 08 03:31:33.449113 master-0 kubenswrapper[33141]: I0308 03:31:33.446356 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 08 03:31:33.471239 master-0 kubenswrapper[33141]: I0308 03:31:33.468182 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 08 03:31:33.491563 master-0 kubenswrapper[33141]: I0308 03:31:33.490273 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 08 03:31:33.502413 master-0 kubenswrapper[33141]: I0308 03:31:33.502374 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 08 03:31:33.523087 master-0 kubenswrapper[33141]: I0308 03:31:33.523039 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.540587 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/beed862c-6283-4568-aa2e-f49b31e30a3b-sys\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.540665 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f2057f75-159d-4416-a234-050f0fe1afc9-audit-dir\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.540719 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-server-tls\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.540746 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-host\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.540795 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-qfbvt\" (UID: \"81abc17a-8a51-44e2-a5df-5ddb394a9fa6\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.540813 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d2a53f3b-7e22-47eb-9f28-da3441b3662f-service-ca\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.540828 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89prb\" (UniqueName: \"kubernetes.io/projected/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-kube-api-access-89prb\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.540862 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.540878 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzgg5\" (UniqueName: \"kubernetes.io/projected/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-api-access-nzgg5\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.540897 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/2728b91e-d59a-4e85-b245-0f297e9377f9-snapshots\") pod \"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " pod="openshift-insights/insights-operator-8f89dfddd-9l8dc"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.540938 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8c65557b-9566-49f1-a049-fe492ca201b5-images\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.540954 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-kubernetes\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.540978 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-env-overrides\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541017 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541038 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae8f3a1e-689b-4107-993a-dde67f4decf2-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541053 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b537a655-ef73-40b5-b228-95ab6cfdedf2-config\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541091 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2ng6\" (UniqueName: \"kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6\") pod \"network-check-target-4lx8s\" (UID: \"0e59f2e1-7fbc-43b1-bc81-7ca5f058d774\") " pod="openshift-network-diagnostics/network-check-target-4lx8s"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541114 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a1b7b0d-6e00-485e-86e8-7bd047569328-webhook-cert\") pod \"packageserver-7fcc847fc6-s2lnw\" (UID: \"7a1b7b0d-6e00-485e-86e8-7bd047569328\") " pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541129 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9b090750-b893-42fe-8def-dfb3f4253d43-metrics-tls\") pod \"dns-default-p6kjc\" (UID: \"9b090750-b893-42fe-8def-dfb3f4253d43\") " pod="openshift-dns/dns-default-p6kjc"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541163 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a55bef81-2381-4036-b171-3dbc77e9c25d-cni-binary-copy\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541181 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-8qznw\" (UID: \"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541197 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrq96\" (UniqueName: \"kubernetes.io/projected/f520fbf8-9403-46bc-9381-226a3a1ed1c7-kube-api-access-hrq96\") pod \"node-resolver-mps4n\" (UID: \"f520fbf8-9403-46bc-9381-226a3a1ed1c7\") " pod="openshift-dns/node-resolver-mps4n"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541213 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541249 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22zrr\" (UniqueName: \"kubernetes.io/projected/beed862c-6283-4568-aa2e-f49b31e30a3b-kube-api-access-22zrr\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541265 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/42b9f2d1-da5c-46b5-b131-d206fa37d436-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-27kjz\" (UID: \"42b9f2d1-da5c-46b5-b131-d206fa37d436\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541286 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-system-cni-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541322 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-os-release\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541339 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541359 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-netns\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541396 33141
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541427 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5qkq\" (UniqueName: \"kubernetes.io/projected/efd90b06-2733-4086-8d70-b9aed3f7c5fa-kube-api-access-w5qkq\") pod \"certified-operators-r97mb\" (UID: \"efd90b06-2733-4086-8d70-b9aed3f7c5fa\") " pod="openshift-marketplace/certified-operators-r97mb" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541447 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541487 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnvtg\" (UniqueName: \"kubernetes.io/projected/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-kube-api-access-vnvtg\") pod \"openshift-controller-manager-operator-8565d84698-h7lpf\" (UID: \"0722d9c3-77b8-4770-9171-d4aeba4b0cc7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541503 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-socket-dir-parent\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541519 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7flfl\" (UniqueName: \"kubernetes.io/projected/2a506cf6-bc39-4089-9caa-4c14c4d15c11-kube-api-access-7flfl\") pod \"openshift-apiserver-operator-799b6db4d7-gstfr\" (UID: \"2a506cf6-bc39-4089-9caa-4c14c4d15c11\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541554 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdzj9\" (UniqueName: \"kubernetes.io/projected/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-kube-api-access-bdzj9\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541580 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tj8l\" (UniqueName: \"kubernetes.io/projected/3c336192-80ee-4d53-a4ec-710cba95fac6-kube-api-access-6tj8l\") pod \"migrator-57ccdf9b5-rrfg6\" (UID: \"3c336192-80ee-4d53-a4ec-710cba95fac6\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-rrfg6" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541596 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-h7lpf\" (UID: \"0722d9c3-77b8-4770-9171-d4aeba4b0cc7\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541634 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttwx8\" (UniqueName: \"kubernetes.io/projected/82ee54a2-5967-4da7-940c-5200d7df098d-kube-api-access-ttwx8\") pod \"redhat-operators-4h9n9\" (UID: \"82ee54a2-5967-4da7-940c-5200d7df098d\") " pod="openshift-marketplace/redhat-operators-4h9n9" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541652 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efd90b06-2733-4086-8d70-b9aed3f7c5fa-catalog-content\") pod \"certified-operators-r97mb\" (UID: \"efd90b06-2733-4086-8d70-b9aed3f7c5fa\") " pod="openshift-marketplace/certified-operators-r97mb" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541667 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-conf-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541683 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/16ca7ace-9608-4686-a039-a6ba6e3ab837-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-wwmnn\" (UID: \"16ca7ace-9608-4686-a039-a6ba6e3ab837\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541722 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-config\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541739 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-etcd-serving-ca\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541755 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz7l8\" (UniqueName: \"kubernetes.io/projected/bd53c98b-51cc-498a-ab37-f743a27bdcfb-kube-api-access-hz7l8\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541787 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/399c5025-da66-4c52-8e68-ea6c996d9cc8-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541806 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q425\" (UniqueName: \"kubernetes.io/projected/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-kube-api-access-6q425\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541822 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-image-import-ca\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541840 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541859 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d446527-f3fd-4a37-a980-7445031928d1-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-7k8j7\" (UID: \"1d446527-f3fd-4a37-a980-7445031928d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541897 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr9bw\" (UniqueName: \"kubernetes.io/projected/399c5025-da66-4c52-8e68-ea6c996d9cc8-kube-api-access-vr9bw\") pod \"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541927 33141 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-systemd-units\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541944 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-config\") pod \"openshift-controller-manager-operator-8565d84698-h7lpf\" (UID: \"0722d9c3-77b8-4770-9171-d4aeba4b0cc7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541962 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppbl6\" (UniqueName: \"kubernetes.io/projected/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-kube-api-access-ppbl6\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.541986 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ct9j\" (UniqueName: \"kubernetes.io/projected/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-kube-api-access-2ct9j\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.542000 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: 
\"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.542037 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfdpq\" (UniqueName: \"kubernetes.io/projected/99923acc-a1b4-4fbc-a636-f9c145856b01-kube-api-access-tfdpq\") pod \"machine-config-server-fstmq\" (UID: \"99923acc-a1b4-4fbc-a636-f9c145856b01\") " pod="openshift-machine-config-operator/machine-config-server-fstmq" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.542054 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b537a655-ef73-40b5-b228-95ab6cfdedf2-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.542070 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2tk7\" (UniqueName: \"kubernetes.io/projected/d5eee869-c27f-4534-bbce-d954c42b36a3-kube-api-access-l2tk7\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.542087 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/beed862c-6283-4568-aa2e-f49b31e30a3b-metrics-client-ca\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q" Mar 08 03:31:33.542035 master-0 kubenswrapper[33141]: I0308 03:31:33.542104 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a058138-8039-4841-821b-7ee5bb8648e4-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-zcr8w\" (UID: \"5a058138-8039-4841-821b-7ee5bb8648e4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542131 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqrn6\" (UniqueName: \"kubernetes.io/projected/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-kube-api-access-qqrn6\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542147 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovnkube-script-lib\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542194 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c72dm\" (UniqueName: \"kubernetes.io/projected/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-kube-api-access-c72dm\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542210 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82ee54a2-5967-4da7-940c-5200d7df098d-catalog-content\") pod \"redhat-operators-4h9n9\" (UID: 
\"82ee54a2-5967-4da7-940c-5200d7df098d\") " pod="openshift-marketplace/redhat-operators-4h9n9" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542249 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542268 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-run-ovn-kubernetes\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542287 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2a53f3b-7e22-47eb-9f28-da3441b3662f-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542305 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rtt8\" (UniqueName: \"kubernetes.io/projected/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-kube-api-access-4rtt8\") pod \"machine-config-daemon-xv682\" (UID: \"7fafb070-7914-41c2-a8b2-e609a0e5bf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542321 33141 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-w8cgc\" (UniqueName: \"kubernetes.io/projected/16ca7ace-9608-4686-a039-a6ba6e3ab837-kube-api-access-w8cgc\") pod \"openshift-state-metrics-74cc79fd76-wwmnn\" (UID: \"16ca7ace-9608-4686-a039-a6ba6e3ab837\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542338 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89e15db4-c541-4d53-878d-706fa022f970-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-rz5c8\" (UID: \"89e15db4-c541-4d53-878d-706fa022f970\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542360 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542381 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-sysconfig\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542396 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-k8s-cni-cncf-io\") pod \"multus-jzw4f\" (UID: 
\"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542412 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea474cd1-8693-4505-9d6f-863d78776d11-utilities\") pod \"community-operators-82rfr\" (UID: \"ea474cd1-8693-4505-9d6f-863d78776d11\") " pod="openshift-marketplace/community-operators-82rfr" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542428 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542453 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovnkube-config\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542469 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-client-ca-bundle\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542484 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5d29f16f-e26f-4b9d-a646-230316e936a8-tmp\") pod \"tuned-qjpkx\" (UID: 
\"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542499 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-service-ca-bundle\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542521 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/38287d1a-b784-4ce9-9650-949d92469519-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-9hjss\" (UID: \"38287d1a-b784-4ce9-9650-949d92469519\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542546 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-rootfs\") pod \"machine-config-daemon-xv682\" (UID: \"7fafb070-7914-41c2-a8b2-e609a0e5bf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542568 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-client-certs\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542592 33141 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8l6s\" (UniqueName: \"kubernetes.io/projected/9b090750-b893-42fe-8def-dfb3f4253d43-kube-api-access-p8l6s\") pod \"dns-default-p6kjc\" (UID: \"9b090750-b893-42fe-8def-dfb3f4253d43\") " pod="openshift-dns/dns-default-p6kjc" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542632 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-cert\") pod \"cluster-autoscaler-operator-69576476f7-jd7rl\" (UID: \"2ffe00fd-6834-4a5b-8b0b-b467d284f23c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542652 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8krg\" (UniqueName: \"kubernetes.io/projected/a0ee8c53-bf36-4459-a2c2-380293a09e26-kube-api-access-c8krg\") pod \"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542670 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7a1b7b0d-6e00-485e-86e8-7bd047569328-tmpfs\") pod \"packageserver-7fcc847fc6-s2lnw\" (UID: \"7a1b7b0d-6e00-485e-86e8-7bd047569328\") " pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542688 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542704 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542724 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542749 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kxn4\" (UniqueName: \"kubernetes.io/projected/ed56c17f-7e15-4776-80a6-3ef091307e89-kube-api-access-4kxn4\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542766 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542789 33141 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/daf9e0ac-b5a3-4a3e-aa57-31b810f634ef-webhook-certs\") pod \"multus-admission-controller-7769569c45-lxr7s\" (UID: \"daf9e0ac-b5a3-4a3e-aa57-31b810f634ef\") " pod="openshift-multus/multus-admission-controller-7769569c45-lxr7s" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542806 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-whereabouts-configmap\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542823 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/965f8eef-c5af-499b-b1db-cf63072781cc-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-vw4v4\" (UID: \"965f8eef-c5af-499b-b1db-cf63072781cc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-vw4v4" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542841 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-fb844\" (UID: \"27f5a0ab-3811-4c17-adc1-9ca48ae18ee1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542860 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/aadf7b67-db33-4392-81f5-1b93eef54545-host-slash\") pod \"iptables-alerter-fpxrc\" (UID: \"aadf7b67-db33-4392-81f5-1b93eef54545\") " pod="openshift-network-operator/iptables-alerter-fpxrc" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542876 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-stats-auth\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542891 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542925 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542941 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert\") pod \"catalog-operator-7d9c49f57b-wsswx\" (UID: \"5a92a557-d023-4531-b3a3-e559af0fe358\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx" 
Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542958 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-ovn\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542975 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.542994 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-config\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.543010 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/399c5025-da66-4c52-8e68-ea6c996d9cc8-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.543027 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fw25\" (UniqueName: 
\"kubernetes.io/projected/8c65557b-9566-49f1-a049-fe492ca201b5-kube-api-access-5fw25\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.543046 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.543063 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-etcd-client\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.543079 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/89fc77c9-b444-4828-8a35-c63ea9335245-host-etc-kube\") pod \"network-operator-7c649bf6d4-wxrfp\" (UID: \"89fc77c9-b444-4828-8a35-c63ea9335245\") " pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.543097 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e6716923-7f46-438f-9cc4-c0f071ca5b1a-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"e6716923-7f46-438f-9cc4-c0f071ca5b1a\") " 
pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.543227 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.543269 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.543441 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-ovnkube-identity-cm\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.543475 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d2a53f3b-7e22-47eb-9f28-da3441b3662f-service-ca\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.543490 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-qfbvt\" (UID: \"81abc17a-8a51-44e2-a5df-5ddb394a9fa6\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" Mar 08 03:31:33.544237 
master-0 kubenswrapper[33141]: I0308 03:31:33.543716 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.543803 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efd90b06-2733-4086-8d70-b9aed3f7c5fa-catalog-content\") pod \"certified-operators-r97mb\" (UID: \"efd90b06-2733-4086-8d70-b9aed3f7c5fa\") " pod="openshift-marketplace/certified-operators-r97mb" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.543815 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82ee54a2-5967-4da7-940c-5200d7df098d-catalog-content\") pod \"redhat-operators-4h9n9\" (UID: \"82ee54a2-5967-4da7-940c-5200d7df098d\") " pod="openshift-marketplace/redhat-operators-4h9n9" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.543841 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b537a655-ef73-40b5-b228-95ab6cfdedf2-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.543881 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efd90b06-2733-4086-8d70-b9aed3f7c5fa-utilities\") pod \"certified-operators-r97mb\" (UID: 
\"efd90b06-2733-4086-8d70-b9aed3f7c5fa\") " pod="openshift-marketplace/certified-operators-r97mb" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.543986 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d2a53f3b-7e22-47eb-9f28-da3441b3662f-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.544078 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/2728b91e-d59a-4e85-b245-0f297e9377f9-snapshots\") pod \"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:31:33.544237 master-0 kubenswrapper[33141]: I0308 03:31:33.544282 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-h7lpf\" (UID: \"0722d9c3-77b8-4770-9171-d4aeba4b0cc7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.544301 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efd90b06-2733-4086-8d70-b9aed3f7c5fa-utilities\") pod \"certified-operators-r97mb\" (UID: \"efd90b06-2733-4086-8d70-b9aed3f7c5fa\") " pod="openshift-marketplace/certified-operators-r97mb" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.544455 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" 
(UniqueName: \"kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-whereabouts-configmap\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.544464 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-config\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.544557 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/aadf7b67-db33-4392-81f5-1b93eef54545-host-slash\") pod \"iptables-alerter-fpxrc\" (UID: \"aadf7b67-db33-4392-81f5-1b93eef54545\") " pod="openshift-network-operator/iptables-alerter-fpxrc" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.544640 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89e15db4-c541-4d53-878d-706fa022f970-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-rz5c8\" (UID: \"89e15db4-c541-4d53-878d-706fa022f970\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.544696 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5d29f16f-e26f-4b9d-a646-230316e936a8-tmp\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.544728 33141 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.544827 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a55bef81-2381-4036-b171-3dbc77e9c25d-cni-binary-copy\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.544833 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.544868 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.544893 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7a1b7b0d-6e00-485e-86e8-7bd047569328-tmpfs\") pod \"packageserver-7fcc847fc6-s2lnw\" (UID: \"7a1b7b0d-6e00-485e-86e8-7bd047569328\") " pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw" Mar 08 
03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.544938 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.545020 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea474cd1-8693-4505-9d6f-863d78776d11-utilities\") pod \"community-operators-82rfr\" (UID: \"ea474cd1-8693-4505-9d6f-863d78776d11\") " pod="openshift-marketplace/community-operators-82rfr" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.545028 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed56c17f-7e15-4776-80a6-3ef091307e89-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.545067 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovnkube-config\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.545188 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/197afe92-5912-4e90-a477-e3abe001bbc7-metrics-tls\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: 
\"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.545218 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-8qznw\" (UID: \"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.545258 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q68p\" (UniqueName: \"kubernetes.io/projected/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-kube-api-access-7q68p\") pod \"package-server-manager-854648ff6d-8qznw\" (UID: \"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.545284 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4gf5\" (UniqueName: \"kubernetes.io/projected/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-kube-api-access-h4gf5\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.545301 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd53c98b-51cc-498a-ab37-f743a27bdcfb-serving-cert\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.545319 33141 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1bcaff-7dbd-4559-92fc-5453993f643e-serving-cert\") pod \"openshift-config-operator-64488f9d78-d4wnv\" (UID: \"bd1bcaff-7dbd-4559-92fc-5453993f643e\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.545322 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d446527-f3fd-4a37-a980-7445031928d1-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-7k8j7\" (UID: \"1d446527-f3fd-4a37-a980-7445031928d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.545389 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/399c5025-da66-4c52-8e68-ea6c996d9cc8-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.545415 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-config\") pod \"openshift-controller-manager-operator-8565d84698-h7lpf\" (UID: \"0722d9c3-77b8-4770-9171-d4aeba4b0cc7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.545450 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/9b090750-b893-42fe-8def-dfb3f4253d43-config-volume\") pod \"dns-default-p6kjc\" (UID: \"9b090750-b893-42fe-8def-dfb3f4253d43\") " pod="openshift-dns/dns-default-p6kjc" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.545484 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/89fc77c9-b444-4828-8a35-c63ea9335245-host-etc-kube\") pod \"network-operator-7c649bf6d4-wxrfp\" (UID: \"89fc77c9-b444-4828-8a35-c63ea9335245\") " pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.545495 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1bcaff-7dbd-4559-92fc-5453993f643e-serving-cert\") pod \"openshift-config-operator-64488f9d78-d4wnv\" (UID: \"bd1bcaff-7dbd-4559-92fc-5453993f643e\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.545500 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2468d2a3-ec65-4888-a86a-3f66fa311f56-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-xtwpr\" (UID: \"2468d2a3-ec65-4888-a86a-3f66fa311f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.545565 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2728b91e-d59a-4e85-b245-0f297e9377f9-service-ca-bundle\") pod \"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 
03:31:33.545602 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgvcz\" (UniqueName: \"kubernetes.io/projected/5a92a557-d023-4531-b3a3-e559af0fe358-kube-api-access-vgvcz\") pod \"catalog-operator-7d9c49f57b-wsswx\" (UID: \"5a92a557-d023-4531-b3a3-e559af0fe358\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.545636 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-etc-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.545688 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-node-log\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.545719 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmdmd\" (UniqueName: \"kubernetes.io/projected/2728b91e-d59a-4e85-b245-0f297e9377f9-kube-api-access-zmdmd\") pod \"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.545746 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a1b7b0d-6e00-485e-86e8-7bd047569328-apiservice-cert\") pod \"packageserver-7fcc847fc6-s2lnw\" (UID: 
\"7a1b7b0d-6e00-485e-86e8-7bd047569328\") " pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.545798 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/16ca7ace-9608-4686-a039-a6ba6e3ab837-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-wwmnn\" (UID: \"16ca7ace-9608-4686-a039-a6ba6e3ab837\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.545927 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2468d2a3-ec65-4888-a86a-3f66fa311f56-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-xtwpr\" (UID: \"2468d2a3-ec65-4888-a86a-3f66fa311f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.545992 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-lib-modules\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.546013 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-proxy-ca-bundles\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: 
I0308 03:31:33.546049 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-etcd-serving-ca\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.546058 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-serving-cert\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.546106 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms6s7\" (UniqueName: \"kubernetes.io/projected/4711e21f-da6d-47ee-8722-64663e05de10-kube-api-access-ms6s7\") pod \"cluster-olm-operator-77899cf6d-7vlmt\" (UID: \"4711e21f-da6d-47ee-8722-64663e05de10\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.546328 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-serving-cert\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.546359 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6716923-7f46-438f-9cc4-c0f071ca5b1a-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"e6716923-7f46-438f-9cc4-c0f071ca5b1a\") " 
pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.546377 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a506cf6-bc39-4089-9caa-4c14c4d15c11-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-gstfr\" (UID: \"2a506cf6-bc39-4089-9caa-4c14c4d15c11\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.546417 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-proxy-tls\") pod \"machine-config-daemon-xv682\" (UID: \"7fafb070-7914-41c2-a8b2-e609a0e5bf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.546449 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f2057f75-159d-4416-a234-050f0fe1afc9-node-pullsecrets\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.546474 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkckt\" (UniqueName: \"kubernetes.io/projected/42b9f2d1-da5c-46b5-b131-d206fa37d436-kube-api-access-bkckt\") pod \"machine-config-controller-ff46b7bdf-27kjz\" (UID: \"42b9f2d1-da5c-46b5-b131-d206fa37d436\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.546521 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-kubelet\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.547242 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a506cf6-bc39-4089-9caa-4c14c4d15c11-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-gstfr\" (UID: \"2a506cf6-bc39-4089-9caa-4c14c4d15c11\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.547950 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-client-ca\") pod \"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548018 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/beed862c-6283-4568-aa2e-f49b31e30a3b-root\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548049 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42fg\" (UniqueName: \"kubernetes.io/projected/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-kube-api-access-f42fg\") pod \"cluster-autoscaler-operator-69576476f7-jd7rl\" (UID: \"2ffe00fd-6834-4a5b-8b0b-b467d284f23c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" Mar 08 03:31:33.552401 
master-0 kubenswrapper[33141]: I0308 03:31:33.548082 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2468d2a3-ec65-4888-a86a-3f66fa311f56-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-xtwpr\" (UID: \"2468d2a3-ec65-4888-a86a-3f66fa311f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548106 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-client-ca\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548128 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-encryption-config\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548150 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttqvt\" (UniqueName: \"kubernetes.io/projected/90ef7c0a-7c6f-45aa-865d-1e247110b265-kube-api-access-ttqvt\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548186 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-cni-binary-copy\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548213 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea474cd1-8693-4505-9d6f-863d78776d11-catalog-content\") pod \"community-operators-82rfr\" (UID: \"ea474cd1-8693-4505-9d6f-863d78776d11\") " pod="openshift-marketplace/community-operators-82rfr" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548237 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c65557b-9566-49f1-a049-fe492ca201b5-config\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548254 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-serving-cert\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548270 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-client\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548286 33141 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2728b91e-d59a-4e85-b245-0f297e9377f9-serving-cert\") pod \"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548303 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6716923-7f46-438f-9cc4-c0f071ca5b1a-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"e6716923-7f46-438f-9cc4-c0f071ca5b1a\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548320 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7af634f0-65ac-402a-acd6-a8aad11b37ab-signing-cabundle\") pod \"service-ca-84bfdbbb7f-jnpl5\" (UID: \"7af634f0-65ac-402a-acd6-a8aad11b37ab\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548337 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/4711e21f-da6d-47ee-8722-64663e05de10-operand-assets\") pod \"cluster-olm-operator-77899cf6d-7vlmt\" (UID: \"4711e21f-da6d-47ee-8722-64663e05de10\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548354 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-tls\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q" Mar 08 
03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548379 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548408 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548433 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-etcd-serving-ca\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548467 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-mcd-auth-proxy-config\") pod \"machine-config-daemon-xv682\" (UID: \"7fafb070-7914-41c2-a8b2-e609a0e5bf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548492 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gnng\" (UniqueName: 
\"kubernetes.io/projected/3d69f101-60a8-41fd-bcda-4eb654c626a2-kube-api-access-8gnng\") pod \"csi-snapshot-controller-operator-5685fbc7d-xbrdp\" (UID: \"3d69f101-60a8-41fd-bcda-4eb654c626a2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-xbrdp" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548516 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-audit-log\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548538 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/197afe92-5912-4e90-a477-e3abe001bbc7-bound-sa-token\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548561 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-run-netns\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548782 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7af634f0-65ac-402a-acd6-a8aad11b37ab-signing-cabundle\") pod \"service-ca-84bfdbbb7f-jnpl5\" (UID: \"7af634f0-65ac-402a-acd6-a8aad11b37ab\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5" Mar 08 03:31:33.552401 master-0 
kubenswrapper[33141]: I0308 03:31:33.548854 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-encryption-config\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.548863 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/4711e21f-da6d-47ee-8722-64663e05de10-operand-assets\") pod \"cluster-olm-operator-77899cf6d-7vlmt\" (UID: \"4711e21f-da6d-47ee-8722-64663e05de10\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549077 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-cni-binary-copy\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549142 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea474cd1-8693-4505-9d6f-863d78776d11-catalog-content\") pod \"community-operators-82rfr\" (UID: \"ea474cd1-8693-4505-9d6f-863d78776d11\") " pod="openshift-marketplace/community-operators-82rfr" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549211 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: 
\"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549317 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-serving-cert\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549358 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-audit-log\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549438 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-serving-cert\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549459 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-sysctl-conf\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549470 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-client\") pod 
\"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549474 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-run\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549511 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549538 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkp89\" (UniqueName: \"kubernetes.io/projected/7a1b7b0d-6e00-485e-86e8-7bd047569328-kube-api-access-fkp89\") pod \"packageserver-7fcc847fc6-s2lnw\" (UID: \"7a1b7b0d-6e00-485e-86e8-7bd047569328\") " pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549556 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549578 33141 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-log-socket\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549596 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert\") pod \"olm-operator-d64cfc9db-t659n\" (UID: \"d68278f6-59d5-4bbf-b969-e47635ffd4cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549615 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/42b9f2d1-da5c-46b5-b131-d206fa37d436-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-27kjz\" (UID: \"42b9f2d1-da5c-46b5-b131-d206fa37d436\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549631 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-hostroot\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549668 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549688 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549742 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/197afe92-5912-4e90-a477-e3abe001bbc7-trusted-ca\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549760 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-config\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549771 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549776 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/d2a53f3b-7e22-47eb-9f28-da3441b3662f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549814 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f520fbf8-9403-46bc-9381-226a3a1ed1c7-hosts-file\") pod \"node-resolver-mps4n\" (UID: \"f520fbf8-9403-46bc-9381-226a3a1ed1c7\") " pod="openshift-dns/node-resolver-mps4n" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549830 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-slash\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549849 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549876 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-etc-kubernetes\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 
03:31:33.549897 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-var-lib-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549936 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-cnibin\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549961 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549975 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.549979 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6176b631-3911-41cd-beb6-5bc2e924c3a7-cert\") pod \"ingress-canary-fhncs\" (UID: \"6176b631-3911-41cd-beb6-5bc2e924c3a7\") " 
pod="openshift-ingress-canary/ingress-canary-fhncs" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.550010 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-trusted-ca-bundle\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.550028 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.550048 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a058138-8039-4841-821b-7ee5bb8648e4-serving-cert\") pod \"kube-apiserver-operator-68bd585b-zcr8w\" (UID: \"5a058138-8039-4841-821b-7ee5bb8648e4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.550209 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d5eee869-c27f-4534-bbce-d954c42b36a3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.550295 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/103158c5-c99f-4224-bf5a-e23b1aaf9172-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.550416 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d68278f6-59d5-4bbf-b969-e47635ffd4cc-srv-cert\") pod \"olm-operator-d64cfc9db-t659n\" (UID: \"d68278f6-59d5-4bbf-b969-e47635ffd4cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.550452 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2728b91e-d59a-4e85-b245-0f297e9377f9-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.550465 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f6ee6202-11e5-4586-ae46-075da1ad7f1a-metrics-certs\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.550472 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls\") pod \"dns-operator-589895fbb7-9mhwc\" (UID: \"ef16d7ae-66aa-45d4-b1a6-1327738a46bb\") " pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.550491 
33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-cni-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.550507 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-sys\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.550527 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g28tv\" (UniqueName: \"kubernetes.io/projected/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-kube-api-access-g28tv\") pod \"cluster-samples-operator-664cb58b85-fb844\" (UID: \"27f5a0ab-3811-4c17-adc1-9ca48ae18ee1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.550542 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-sysctl-d\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.550560 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89e15db4-c541-4d53-878d-706fa022f970-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-rz5c8\" (UID: \"89e15db4-c541-4d53-878d-706fa022f970\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.550580 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjzs5\" (UniqueName: \"kubernetes.io/projected/965f8eef-c5af-499b-b1db-cf63072781cc-kube-api-access-mjzs5\") pod \"cluster-storage-operator-6fbfc8dc8f-vw4v4\" (UID: \"965f8eef-c5af-499b-b1db-cf63072781cc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-vw4v4" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.550599 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-wtmp\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.550617 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-cni-bin\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.550637 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5pgg\" (UniqueName: \"kubernetes.io/projected/103158c5-c99f-4224-bf5a-e23b1aaf9172-kube-api-access-m5pgg\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.550655 33141 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2kd6j\" (UniqueName: \"kubernetes.io/projected/197afe92-5912-4e90-a477-e3abe001bbc7-kube-api-access-2kd6j\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.550671 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/197afe92-5912-4e90-a477-e3abe001bbc7-trusted-ca\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.550677 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a058138-8039-4841-821b-7ee5bb8648e4-config\") pod \"kube-apiserver-operator-68bd585b-zcr8w\" (UID: \"5a058138-8039-4841-821b-7ee5bb8648e4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w" Mar 08 03:31:33.552401 master-0 kubenswrapper[33141]: I0308 03:31:33.550709 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.550736 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-config\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " 
pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.550757 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/16ca7ace-9608-4686-a039-a6ba6e3ab837-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-wwmnn\" (UID: \"16ca7ace-9608-4686-a039-a6ba6e3ab837\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.550778 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snwdh\" (UniqueName: \"kubernetes.io/projected/6176b631-3911-41cd-beb6-5bc2e924c3a7-kube-api-access-snwdh\") pod \"ingress-canary-fhncs\" (UID: \"6176b631-3911-41cd-beb6-5bc2e924c3a7\") " pod="openshift-ingress-canary/ingress-canary-fhncs" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.550797 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl7m5\" (UniqueName: \"kubernetes.io/projected/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-kube-api-access-hl7m5\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.550815 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.550834 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9vkx\" (UniqueName: 
\"kubernetes.io/projected/f2057f75-159d-4416-a234-050f0fe1afc9-kube-api-access-c9vkx\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.550851 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.550869 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-modprobe-d\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.550870 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-metrics-tls\") pod \"dns-operator-589895fbb7-9mhwc\" (UID: \"ef16d7ae-66aa-45d4-b1a6-1327738a46bb\") " pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.550886 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ee8c53-bf36-4459-a2c2-380293a09e26-serving-cert\") pod \"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:31:33.569417 master-0 
kubenswrapper[33141]: I0308 03:31:33.550907 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90ef7c0a-7c6f-45aa-865d-1e247110b265-serving-cert\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.550939 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-jd7rl\" (UID: \"2ffe00fd-6834-4a5b-8b0b-b467d284f23c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.550957 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38287d1a-b784-4ce9-9650-949d92469519-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-9hjss\" (UID: \"38287d1a-b784-4ce9-9650-949d92469519\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.550975 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-multus-certs\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.550999 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/99923acc-a1b4-4fbc-a636-f9c145856b01-certs\") pod 
\"machine-config-server-fstmq\" (UID: \"99923acc-a1b4-4fbc-a636-f9c145856b01\") " pod="openshift-machine-config-operator/machine-config-server-fstmq" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551017 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-audit\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551174 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-config\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551289 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-trusted-ca-bundle\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551408 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90ef7c0a-7c6f-45aa-865d-1e247110b265-serving-cert\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551508 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5a058138-8039-4841-821b-7ee5bb8648e4-serving-cert\") pod \"kube-apiserver-operator-68bd585b-zcr8w\" (UID: \"5a058138-8039-4841-821b-7ee5bb8648e4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551520 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-metrics-certs\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551540 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-cni-netd\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551560 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d446527-f3fd-4a37-a980-7445031928d1-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-7k8j7\" (UID: \"1d446527-f3fd-4a37-a980-7445031928d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551577 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-daemon-config\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.569417 master-0 
kubenswrapper[33141]: I0308 03:31:33.551594 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551613 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj7h8\" (UniqueName: \"kubernetes.io/projected/a55bef81-2381-4036-b171-3dbc77e9c25d-kube-api-access-hj7h8\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551629 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a058138-8039-4841-821b-7ee5bb8648e4-config\") pod \"kube-apiserver-operator-68bd585b-zcr8w\" (UID: \"5a058138-8039-4841-821b-7ee5bb8648e4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551631 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxcml\" (UniqueName: \"kubernetes.io/projected/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-kube-api-access-kxcml\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551660 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: 
\"kubernetes.io/empty-dir/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-textfile\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551681 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-kubelet\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551703 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d8xq\" (UniqueName: \"kubernetes.io/projected/9fb588a9-6240-4513-8e4b-248eb43d3f06-kube-api-access-5d8xq\") pod \"csi-snapshot-controller-7577d6f48-kfmd9\" (UID: \"9fb588a9-6240-4513-8e4b-248eb43d3f06\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551722 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sstv2\" (UniqueName: \"kubernetes.io/projected/d68278f6-59d5-4bbf-b969-e47635ffd4cc-kube-api-access-sstv2\") pod \"olm-operator-d64cfc9db-t659n\" (UID: \"d68278f6-59d5-4bbf-b969-e47635ffd4cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551742 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4gcw\" (UniqueName: \"kubernetes.io/projected/38287d1a-b784-4ce9-9650-949d92469519-kube-api-access-f4gcw\") pod \"cloud-credential-operator-55d85b7b47-9hjss\" (UID: \"38287d1a-b784-4ce9-9650-949d92469519\") " 
pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551759 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2a53f3b-7e22-47eb-9f28-da3441b3662f-serving-cert\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551777 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/399c5025-da66-4c52-8e68-ea6c996d9cc8-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551794 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-encryption-config\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551811 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551829 33141 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-cni-bin\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551862 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89e15db4-c541-4d53-878d-706fa022f970-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-rz5c8\" (UID: \"89e15db4-c541-4d53-878d-706fa022f970\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551883 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d446527-f3fd-4a37-a980-7445031928d1-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-7k8j7\" (UID: \"1d446527-f3fd-4a37-a980-7445031928d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.551882 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovn-node-metrics-cert\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.552484 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/103158c5-c99f-4224-bf5a-e23b1aaf9172-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: 
\"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.552527 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r6wb\" (UniqueName: \"kubernetes.io/projected/ea474cd1-8693-4505-9d6f-863d78776d11-kube-api-access-2r6wb\") pod \"community-operators-82rfr\" (UID: \"ea474cd1-8693-4505-9d6f-863d78776d11\") " pod="openshift-marketplace/community-operators-82rfr" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.552548 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-audit-dir\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.552570 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-metrics-server-audit-profiles\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.552588 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-systemd\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.552607 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-default-certificate\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.552625 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ed56c17f-7e15-4776-80a6-3ef091307e89-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.552646 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a506cf6-bc39-4089-9caa-4c14c4d15c11-config\") pod \"openshift-apiserver-operator-799b6db4d7-gstfr\" (UID: \"2a506cf6-bc39-4089-9caa-4c14c4d15c11\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.552665 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm9tk\" (UniqueName: \"kubernetes.io/projected/7af634f0-65ac-402a-acd6-a8aad11b37ab-kube-api-access-sm9tk\") pod \"service-ca-84bfdbbb7f-jnpl5\" (UID: \"7af634f0-65ac-402a-acd6-a8aad11b37ab\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.552685 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-webhook-cert\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " 
pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.552687 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.552702 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82ee54a2-5967-4da7-940c-5200d7df098d-utilities\") pod \"redhat-operators-4h9n9\" (UID: \"82ee54a2-5967-4da7-940c-5200d7df098d\") " pod="openshift-marketplace/redhat-operators-4h9n9" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.552762 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82ee54a2-5967-4da7-940c-5200d7df098d-utilities\") pod \"redhat-operators-4h9n9\" (UID: \"82ee54a2-5967-4da7-940c-5200d7df098d\") " pod="openshift-marketplace/redhat-operators-4h9n9" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.552793 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-var-lib-kubelet\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.552817 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-config\") pod 
\"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.552834 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-system-cni-dir\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.552854 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4t2j\" (UniqueName: \"kubernetes.io/projected/b537a655-ef73-40b5-b228-95ab6cfdedf2-kube-api-access-d4t2j\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.552878 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4711e21f-da6d-47ee-8722-64663e05de10-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-7vlmt\" (UID: \"4711e21f-da6d-47ee-8722-64663e05de10\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.552897 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bd1bcaff-7dbd-4559-92fc-5453993f643e-available-featuregates\") pod \"openshift-config-operator-64488f9d78-d4wnv\" (UID: \"bd1bcaff-7dbd-4559-92fc-5453993f643e\") " 
pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.552916 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-daemon-config\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.552930 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-env-overrides\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.553132 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-env-overrides\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.553204 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njrcj\" (UniqueName: \"kubernetes.io/projected/f6ee6202-11e5-4586-ae46-075da1ad7f1a-kube-api-access-njrcj\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.553362 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: 
\"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.553519 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/103158c5-c99f-4224-bf5a-e23b1aaf9172-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.553556 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89e15db4-c541-4d53-878d-706fa022f970-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-rz5c8\" (UID: \"89e15db4-c541-4d53-878d-706fa022f970\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.553658 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-textfile\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.553849 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a506cf6-bc39-4089-9caa-4c14c4d15c11-config\") pod \"openshift-apiserver-operator-799b6db4d7-gstfr\" (UID: \"2a506cf6-bc39-4089-9caa-4c14c4d15c11\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.553945 33141 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-cni-multus\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.553967 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2a53f3b-7e22-47eb-9f28-da3441b3662f-serving-cert\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.554168 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ed56c17f-7e15-4776-80a6-3ef091307e89-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.554572 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.554638 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bd1bcaff-7dbd-4559-92fc-5453993f643e-available-featuregates\") pod \"openshift-config-operator-64488f9d78-d4wnv\" (UID: \"bd1bcaff-7dbd-4559-92fc-5453993f643e\") " 
pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.558617 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4711e21f-da6d-47ee-8722-64663e05de10-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-7vlmt\" (UID: \"4711e21f-da6d-47ee-8722-64663e05de10\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.558752 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.558798 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-tuned\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.558822 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-systemd\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.558843 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7af634f0-65ac-402a-acd6-a8aad11b37ab-signing-key\") pod \"service-ca-84bfdbbb7f-jnpl5\" (UID: \"7af634f0-65ac-402a-acd6-a8aad11b37ab\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.558865 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-os-release\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.558890 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p4tj\" (UniqueName: \"kubernetes.io/projected/5d29f16f-e26f-4b9d-a646-230316e936a8-kube-api-access-7p4tj\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.558927 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-audit-policies\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.558944 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-etcd-client\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.558964 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wplgs\" (UniqueName: \"kubernetes.io/projected/bd1bcaff-7dbd-4559-92fc-5453993f643e-kube-api-access-wplgs\") pod \"openshift-config-operator-64488f9d78-d4wnv\" (UID: \"bd1bcaff-7dbd-4559-92fc-5453993f643e\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.558983 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgfrv\" (UniqueName: \"kubernetes.io/projected/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-kube-api-access-mgfrv\") pod \"dns-operator-589895fbb7-9mhwc\" (UID: \"ef16d7ae-66aa-45d4-b1a6-1327738a46bb\") " pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.559004 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-trusted-ca-bundle\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.559023 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctdbq\" (UniqueName: \"kubernetes.io/projected/ae8f3a1e-689b-4107-993a-dde67f4decf2-kube-api-access-ctdbq\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.559042 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/399c5025-da66-4c52-8e68-ea6c996d9cc8-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.559143 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-images\") pod \"machine-config-operator-fdb5c78b5-qfbvt\" (UID: \"81abc17a-8a51-44e2-a5df-5ddb394a9fa6\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.559421 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-audit-policies\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.559608 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7af634f0-65ac-402a-acd6-a8aad11b37ab-signing-key\") pod \"service-ca-84bfdbbb7f-jnpl5\" (UID: \"7af634f0-65ac-402a-acd6-a8aad11b37ab\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.559615 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-etcd-client\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.559761 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-tuned\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.559788 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.559825 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qvl4\" (UniqueName: \"kubernetes.io/projected/1d446527-f3fd-4a37-a980-7445031928d1-kube-api-access-2qvl4\") pod \"kube-storage-version-migrator-operator-7f65c457f5-7k8j7\" (UID: \"1d446527-f3fd-4a37-a980-7445031928d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.559984 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90ef7c0a-7c6f-45aa-865d-1e247110b265-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.559997 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxhht\" (UniqueName: \"kubernetes.io/projected/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-kube-api-access-cxhht\") pod \"machine-config-operator-fdb5c78b5-qfbvt\" (UID: \"81abc17a-8a51-44e2-a5df-5ddb394a9fa6\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.560065 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t29sr\" (UniqueName: \"kubernetes.io/projected/daf9e0ac-b5a3-4a3e-aa57-31b810f634ef-kube-api-access-t29sr\") pod \"multus-admission-controller-7769569c45-lxr7s\" (UID: \"daf9e0ac-b5a3-4a3e-aa57-31b810f634ef\") " pod="openshift-multus/multus-admission-controller-7769569c45-lxr7s"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.560126 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.560153 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-ca\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.560186 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-cnibin\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.560342 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-etcd-ca\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.560347 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/99923acc-a1b4-4fbc-a636-f9c145856b01-node-bootstrap-token\") pod \"machine-config-server-fstmq\" (UID: \"99923acc-a1b4-4fbc-a636-f9c145856b01\") " pod="openshift-machine-config-operator/machine-config-server-fstmq"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.562272 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.565439 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a1b7b0d-6e00-485e-86e8-7bd047569328-webhook-cert\") pod \"packageserver-7fcc847fc6-s2lnw\" (UID: \"7a1b7b0d-6e00-485e-86e8-7bd047569328\") " pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw"
Mar 08 03:31:33.569417 master-0 kubenswrapper[33141]: I0308 03:31:33.566657 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a1b7b0d-6e00-485e-86e8-7bd047569328-apiservice-cert\") pod \"packageserver-7fcc847fc6-s2lnw\" (UID: \"7a1b7b0d-6e00-485e-86e8-7bd047569328\") " pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw"
Mar 08 03:31:33.609936 master-0 kubenswrapper[33141]: I0308 03:31:33.595348 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 08 03:31:33.609936 master-0 kubenswrapper[33141]: I0308 03:31:33.600728 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Mar 08 03:31:33.609936 master-0 kubenswrapper[33141]: I0308 03:31:33.607176 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/399c5025-da66-4c52-8e68-ea6c996d9cc8-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2"
Mar 08 03:31:33.622028 master-0 kubenswrapper[33141]: I0308 03:31:33.620456 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-mw5z6"
Mar 08 03:31:33.641466 master-0 kubenswrapper[33141]: I0308 03:31:33.641320 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 08 03:31:33.646536 master-0 kubenswrapper[33141]: I0308 03:31:33.646497 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2468d2a3-ec65-4888-a86a-3f66fa311f56-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-xtwpr\" (UID: \"2468d2a3-ec65-4888-a86a-3f66fa311f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr"
Mar 08 03:31:33.661459 master-0 kubenswrapper[33141]: I0308 03:31:33.660907 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 08 03:31:33.662069 master-0 kubenswrapper[33141]: I0308 03:31:33.662017 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d2a53f3b-7e22-47eb-9f28-da3441b3662f-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86"
Mar 08 03:31:33.662213 master-0 kubenswrapper[33141]: I0308 03:31:33.662179 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-node-log\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:31:33.662276 master-0 kubenswrapper[33141]: I0308 03:31:33.662243 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-etc-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:31:33.662349 master-0 kubenswrapper[33141]: I0308 03:31:33.662326 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-lib-modules\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx"
Mar 08 03:31:33.662428 master-0 kubenswrapper[33141]: I0308 03:31:33.662405 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-kubelet\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:31:33.662477 master-0 kubenswrapper[33141]: I0308 03:31:33.662438 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f2057f75-159d-4416-a234-050f0fe1afc9-node-pullsecrets\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44"
Mar 08 03:31:33.662527 master-0 kubenswrapper[33141]: I0308 03:31:33.662516 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/beed862c-6283-4568-aa2e-f49b31e30a3b-root\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:31:33.662688 master-0 kubenswrapper[33141]: I0308 03:31:33.662657 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6716923-7f46-438f-9cc4-c0f071ca5b1a-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"e6716923-7f46-438f-9cc4-c0f071ca5b1a\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Mar 08 03:31:33.662818 master-0 kubenswrapper[33141]: I0308 03:31:33.662790 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-run-netns\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:31:33.662861 master-0 kubenswrapper[33141]: I0308 03:31:33.662842 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-sysctl-conf\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx"
Mar 08 03:31:33.662969 master-0 kubenswrapper[33141]: I0308 03:31:33.662947 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-run\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx"
Mar 08 03:31:33.663002 master-0 kubenswrapper[33141]: I0308 03:31:33.662963 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-kubelet\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:31:33.663030 master-0 kubenswrapper[33141]: I0308 03:31:33.663020 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-log-socket\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:31:33.663171 master-0 kubenswrapper[33141]: I0308 03:31:33.663059 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-hostroot\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:31:33.663171 master-0 kubenswrapper[33141]: I0308 03:31:33.663107 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-etc-kubernetes\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:31:33.663171 master-0 kubenswrapper[33141]: I0308 03:31:33.663129 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-lib-modules\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx"
Mar 08 03:31:33.663171 master-0 kubenswrapper[33141]: I0308 03:31:33.663135 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d2a53f3b-7e22-47eb-9f28-da3441b3662f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86"
Mar 08 03:31:33.663171 master-0 kubenswrapper[33141]: I0308 03:31:33.663167 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-sysctl-conf\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx"
Mar 08 03:31:33.663367 master-0 kubenswrapper[33141]: I0308 03:31:33.663189 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f520fbf8-9403-46bc-9381-226a3a1ed1c7-hosts-file\") pod \"node-resolver-mps4n\" (UID: \"f520fbf8-9403-46bc-9381-226a3a1ed1c7\") " pod="openshift-dns/node-resolver-mps4n"
Mar 08 03:31:33.663367 master-0 kubenswrapper[33141]: I0308 03:31:33.663209 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-run\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx"
Mar 08 03:31:33.663367 master-0 kubenswrapper[33141]: I0308 03:31:33.663218 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-slash\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:31:33.663367 master-0 kubenswrapper[33141]: I0308 03:31:33.663250 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f2057f75-159d-4416-a234-050f0fe1afc9-node-pullsecrets\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44"
Mar 08 03:31:33.663367 master-0 kubenswrapper[33141]: I0308 03:31:33.663285 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"
Mar 08 03:31:33.663367 master-0 kubenswrapper[33141]: I0308 03:31:33.663318 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6716923-7f46-438f-9cc4-c0f071ca5b1a-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"e6716923-7f46-438f-9cc4-c0f071ca5b1a\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0"
Mar 08 03:31:33.663367 master-0 kubenswrapper[33141]: I0308 03:31:33.663365 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-var-lib-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:31:33.663367 master-0 kubenswrapper[33141]: I0308 03:31:33.663368 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-run-netns\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:31:33.663577 master-0 kubenswrapper[33141]: I0308 03:31:33.663385 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-cnibin\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6"
Mar 08 03:31:33.663577 master-0 kubenswrapper[33141]: I0308 03:31:33.663422 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-node-log\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:31:33.663577 master-0 kubenswrapper[33141]: I0308 03:31:33.663452 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-cni-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:31:33.663577 master-0 kubenswrapper[33141]: I0308 03:31:33.663487 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-cnibin\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6"
Mar 08 03:31:33.663577 master-0 kubenswrapper[33141]: I0308 03:31:33.663490 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-log-socket\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:31:33.663577 master-0 kubenswrapper[33141]: I0308 03:31:33.663509 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d2a53f3b-7e22-47eb-9f28-da3441b3662f-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86"
Mar 08 03:31:33.663577 master-0 kubenswrapper[33141]: I0308 03:31:33.663528 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-var-lib-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:31:33.663577 master-0 kubenswrapper[33141]: I0308 03:31:33.663567 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f520fbf8-9403-46bc-9381-226a3a1ed1c7-hosts-file\") pod \"node-resolver-mps4n\" (UID: \"f520fbf8-9403-46bc-9381-226a3a1ed1c7\") " pod="openshift-dns/node-resolver-mps4n"
Mar 08 03:31:33.663577 master-0 kubenswrapper[33141]: I0308 03:31:33.663024 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-etc-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:31:33.663813 master-0 kubenswrapper[33141]: I0308 03:31:33.663595 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-cni-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:31:33.663813 master-0 kubenswrapper[33141]: I0308 03:31:33.663599 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-hostroot\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:31:33.663813 master-0 kubenswrapper[33141]: I0308 03:31:33.663616 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-etc-kubernetes\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:31:33.663813 master-0 kubenswrapper[33141]: I0308 03:31:33.663649 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d2a53f3b-7e22-47eb-9f28-da3441b3662f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86"
Mar 08 03:31:33.663813 master-0 kubenswrapper[33141]: I0308 03:31:33.663666 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-slash\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:31:33.663813 master-0 kubenswrapper[33141]: I0308 03:31:33.663687 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-sys\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx"
Mar 08 03:31:33.663813 master-0 kubenswrapper[33141]: I0308 03:31:33.663692 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"
Mar 08 03:31:33.663813 master-0 kubenswrapper[33141]: I0308 03:31:33.663716 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/beed862c-6283-4568-aa2e-f49b31e30a3b-root\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:31:33.663813 master-0 kubenswrapper[33141]: I0308 03:31:33.663734 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-sys\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx"
Mar 08 03:31:33.663813 master-0 kubenswrapper[33141]: I0308 03:31:33.663742 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-sysctl-d\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx"
Mar 08 03:31:33.663813 master-0 kubenswrapper[33141]: I0308 03:31:33.663782 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-wtmp\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:31:33.663813 master-0 kubenswrapper[33141]: I0308 03:31:33.663798 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-cni-bin\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:31:33.664474 master-0 kubenswrapper[33141]: I0308 03:31:33.663849 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-wtmp\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q"
Mar 08 03:31:33.664474 master-0 kubenswrapper[33141]: I0308 03:31:33.663852 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-sysctl-d\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx"
Mar 08 03:31:33.664474 master-0 kubenswrapper[33141]: I0308 03:31:33.663930 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-modprobe-d\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx"
Mar 08 03:31:33.664474 master-0 kubenswrapper[33141]: I0308 03:31:33.663972 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-multus-certs\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:31:33.664474 master-0 kubenswrapper[33141]: I0308 03:31:33.664019 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-cni-netd\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:31:33.664474 master-0 kubenswrapper[33141]: I0308 03:31:33.664037 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc"
Mar 08 03:31:33.664474 master-0 kubenswrapper[33141]: I0308 03:31:33.664063 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/399c5025-da66-4c52-8e68-ea6c996d9cc8-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2"
Mar 08 03:31:33.664474 master-0 kubenswrapper[33141]: I0308 03:31:33.664090 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-kubelet\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:31:33.664474 master-0 kubenswrapper[33141]: I0308 03:31:33.664143 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-cni-bin\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:31:33.664474 master-0 kubenswrapper[33141]: I0308 03:31:33.664169 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-audit-dir\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl"
Mar 08 03:31:33.664474 master-0 kubenswrapper[33141]: I0308 03:31:33.664199 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-systemd\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx"
Mar 08 03:31:33.664474 master-0 kubenswrapper[33141]: I0308 03:31:33.664240 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-var-lib-kubelet\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx"
Mar 08 03:31:33.664474 master-0 kubenswrapper[33141]: I0308 03:31:33.664265 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-system-cni-dir\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6"
Mar 08 03:31:33.664474 master-0 kubenswrapper[33141]: I0308 03:31:33.664296 33141 reconciler_common.go:218] "operationExecutor.MountVolume started
for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-cni-multus\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.664474 master-0 kubenswrapper[33141]: I0308 03:31:33.664327 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-systemd\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.664474 master-0 kubenswrapper[33141]: I0308 03:31:33.664345 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-os-release\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:31:33.664474 master-0 kubenswrapper[33141]: I0308 03:31:33.664361 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/399c5025-da66-4c52-8e68-ea6c996d9cc8-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:31:33.664474 master-0 kubenswrapper[33141]: I0308 03:31:33.664456 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-cnibin\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.664474 master-0 kubenswrapper[33141]: I0308 03:31:33.664472 33141 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-host\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:31:33.664474 master-0 kubenswrapper[33141]: I0308 03:31:33.664490 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/beed862c-6283-4568-aa2e-f49b31e30a3b-sys\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q" Mar 08 03:31:33.665027 master-0 kubenswrapper[33141]: I0308 03:31:33.664508 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f2057f75-159d-4416-a234-050f0fe1afc9-audit-dir\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:33.665027 master-0 kubenswrapper[33141]: I0308 03:31:33.664563 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-kubernetes\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:31:33.665027 master-0 kubenswrapper[33141]: I0308 03:31:33.664604 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-os-release\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.665027 master-0 kubenswrapper[33141]: I0308 03:31:33.664641 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-system-cni-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.665027 master-0 kubenswrapper[33141]: I0308 03:31:33.664660 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.665027 master-0 kubenswrapper[33141]: I0308 03:31:33.664683 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-netns\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.665027 master-0 kubenswrapper[33141]: I0308 03:31:33.664702 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.665027 master-0 kubenswrapper[33141]: I0308 03:31:33.664738 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" Mar 08 03:31:33.665027 master-0 kubenswrapper[33141]: I0308 03:31:33.664756 33141 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-socket-dir-parent\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.665027 master-0 kubenswrapper[33141]: I0308 03:31:33.664807 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-conf-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.665027 master-0 kubenswrapper[33141]: I0308 03:31:33.664855 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-systemd-units\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.665027 master-0 kubenswrapper[33141]: I0308 03:31:33.664947 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-run-ovn-kubernetes\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.665027 master-0 kubenswrapper[33141]: I0308 03:31:33.664979 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:31:33.665027 master-0 kubenswrapper[33141]: I0308 03:31:33.665026 33141 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-sysconfig\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:31:33.665383 master-0 kubenswrapper[33141]: I0308 03:31:33.665043 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-k8s-cni-cncf-io\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.665383 master-0 kubenswrapper[33141]: I0308 03:31:33.665085 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-rootfs\") pod \"machine-config-daemon-xv682\" (UID: \"7fafb070-7914-41c2-a8b2-e609a0e5bf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:31:33.665383 master-0 kubenswrapper[33141]: I0308 03:31:33.665163 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-ovn\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.665383 master-0 kubenswrapper[33141]: I0308 03:31:33.665203 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e6716923-7f46-438f-9cc4-c0f071ca5b1a-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"e6716923-7f46-438f-9cc4-c0f071ca5b1a\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 08 03:31:33.665383 master-0 kubenswrapper[33141]: I0308 03:31:33.665324 33141 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-openvswitch\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.665383 master-0 kubenswrapper[33141]: I0308 03:31:33.665348 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-cni-bin\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.665383 master-0 kubenswrapper[33141]: I0308 03:31:33.665368 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f2057f75-159d-4416-a234-050f0fe1afc9-audit-dir\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:33.665578 master-0 kubenswrapper[33141]: I0308 03:31:33.665397 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.665578 master-0 kubenswrapper[33141]: I0308 03:31:33.665407 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-kubernetes\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:31:33.665578 master-0 kubenswrapper[33141]: I0308 03:31:33.665428 
33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" Mar 08 03:31:33.665578 master-0 kubenswrapper[33141]: I0308 03:31:33.665444 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-netns\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.665578 master-0 kubenswrapper[33141]: I0308 03:31:33.665461 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-socket-dir-parent\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.665578 master-0 kubenswrapper[33141]: I0308 03:31:33.665475 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-host\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:31:33.665578 master-0 kubenswrapper[33141]: I0308 03:31:33.665487 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-multus-conf-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.665578 master-0 kubenswrapper[33141]: I0308 03:31:33.665525 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-run-ovn-kubernetes\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.665578 master-0 kubenswrapper[33141]: I0308 03:31:33.665526 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-multus-certs\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.665578 master-0 kubenswrapper[33141]: I0308 03:31:33.665544 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-systemd-units\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.665578 master-0 kubenswrapper[33141]: I0308 03:31:33.665566 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:31:33.666126 master-0 kubenswrapper[33141]: I0308 03:31:33.665596 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-sysconfig\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:31:33.666126 master-0 kubenswrapper[33141]: I0308 03:31:33.665618 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-run-k8s-cni-cncf-io\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.666126 master-0 kubenswrapper[33141]: I0308 03:31:33.665643 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-rootfs\") pod \"machine-config-daemon-xv682\" (UID: \"7fafb070-7914-41c2-a8b2-e609a0e5bf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:31:33.666126 master-0 kubenswrapper[33141]: I0308 03:31:33.665664 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-ovn\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.666126 master-0 kubenswrapper[33141]: I0308 03:31:33.665686 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e6716923-7f46-438f-9cc4-c0f071ca5b1a-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"e6716923-7f46-438f-9cc4-c0f071ca5b1a\") " pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 08 03:31:33.666126 master-0 kubenswrapper[33141]: I0308 03:31:33.665698 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-modprobe-d\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:31:33.666126 master-0 kubenswrapper[33141]: I0308 03:31:33.665727 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-os-release\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.666126 master-0 kubenswrapper[33141]: I0308 03:31:33.665745 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-cni-netd\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.666126 master-0 kubenswrapper[33141]: I0308 03:31:33.665761 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/399c5025-da66-4c52-8e68-ea6c996d9cc8-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:31:33.666126 master-0 kubenswrapper[33141]: I0308 03:31:33.665788 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/399c5025-da66-4c52-8e68-ea6c996d9cc8-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:31:33.666126 master-0 kubenswrapper[33141]: I0308 03:31:33.665796 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-cnibin\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.666126 master-0 kubenswrapper[33141]: I0308 03:31:33.665817 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:31:33.666126 master-0 kubenswrapper[33141]: I0308 03:31:33.665837 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-host-kubelet\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.666126 master-0 kubenswrapper[33141]: I0308 03:31:33.665863 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-cni-bin\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.666126 master-0 kubenswrapper[33141]: I0308 03:31:33.665874 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-system-cni-dir\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.666126 master-0 kubenswrapper[33141]: I0308 03:31:33.665888 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-audit-dir\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:31:33.666126 master-0 kubenswrapper[33141]: I0308 03:31:33.665923 33141 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/beed862c-6283-4568-aa2e-f49b31e30a3b-sys\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q" Mar 08 03:31:33.666126 master-0 kubenswrapper[33141]: I0308 03:31:33.665957 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-etc-systemd\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:31:33.666126 master-0 kubenswrapper[33141]: I0308 03:31:33.665973 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a55bef81-2381-4036-b171-3dbc77e9c25d-host-var-lib-cni-multus\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f" Mar 08 03:31:33.666126 master-0 kubenswrapper[33141]: I0308 03:31:33.665985 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-run-systemd\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:33.666126 master-0 kubenswrapper[33141]: I0308 03:31:33.666018 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-system-cni-dir\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:31:33.666126 master-0 kubenswrapper[33141]: I0308 03:31:33.666024 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/5d29f16f-e26f-4b9d-a646-230316e936a8-var-lib-kubelet\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx" Mar 08 03:31:33.666126 master-0 kubenswrapper[33141]: I0308 03:31:33.666058 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d5eee869-c27f-4534-bbce-d954c42b36a3-os-release\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:31:33.669493 master-0 kubenswrapper[33141]: I0308 03:31:33.669434 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2468d2a3-ec65-4888-a86a-3f66fa311f56-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-xtwpr\" (UID: \"2468d2a3-ec65-4888-a86a-3f66fa311f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr" Mar 08 03:31:33.683317 master-0 kubenswrapper[33141]: I0308 03:31:33.683260 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 08 03:31:33.707037 master-0 kubenswrapper[33141]: I0308 03:31:33.707010 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 08 03:31:33.709343 master-0 kubenswrapper[33141]: I0308 03:31:33.709314 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d82cf0db-0891-482d-856b-1675843042dd-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" Mar 08 03:31:33.726099 master-0 kubenswrapper[33141]: I0308 
03:31:33.725948 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 08 03:31:33.738563 master-0 kubenswrapper[33141]: I0308 03:31:33.737971 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5a92a557-d023-4531-b3a3-e559af0fe358-srv-cert\") pod \"catalog-operator-7d9c49f57b-wsswx\" (UID: \"5a92a557-d023-4531-b3a3-e559af0fe358\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx" Mar 08 03:31:33.742426 master-0 kubenswrapper[33141]: I0308 03:31:33.742293 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 08 03:31:33.761311 master-0 kubenswrapper[33141]: I0308 03:31:33.761150 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 08 03:31:33.761311 master-0 kubenswrapper[33141]: I0308 03:31:33.761236 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d82cf0db-0891-482d-856b-1675843042dd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" Mar 08 03:31:33.793157 master-0 kubenswrapper[33141]: I0308 03:31:33.792972 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 08 03:31:33.810367 master-0 kubenswrapper[33141]: I0308 03:31:33.810189 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 08 03:31:33.820476 master-0 kubenswrapper[33141]: I0308 03:31:33.820419 33141 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 08 03:31:33.822599 master-0 kubenswrapper[33141]: I0308 03:31:33.822512 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"29daacb2c26fcf18f9f3b673ab22e9e9aa0de4d9b19b229cdf38f36ca276b550"} Mar 08 03:31:33.822599 master-0 kubenswrapper[33141]: I0308 03:31:33.822551 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"d43311a2bb6696a281415deafd9508871e6a5380ebdde27cb56f0a0571bc31fd"} Mar 08 03:31:33.822599 master-0 kubenswrapper[33141]: I0308 03:31:33.822562 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"4a787b02c2810c812e80cb502488def3ecbbbab6d8a206695ecdb4d86c0073a9"} Mar 08 03:31:33.822599 master-0 kubenswrapper[33141]: I0308 03:31:33.822570 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"cc4330769e3e177675679ac64fa42ad67f0209b63a892fa3de4c906aeb66af3e"} Mar 08 03:31:33.824376 master-0 kubenswrapper[33141]: I0308 03:31:33.824215 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-webhook-cert\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:31:33.826001 master-0 kubenswrapper[33141]: I0308 03:31:33.825137 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"899242a15b2bdf3b4a04fb323647ca94","Type":"ContainerStarted","Data":"325c06ebd1141b90c76331b637fb56d0b788e8b9804bc03c4400966f3dd29304"} Mar 08 03:31:33.826001 master-0 kubenswrapper[33141]: I0308 03:31:33.825474 33141 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 03:31:33.846710 master-0 kubenswrapper[33141]: I0308 03:31:33.842856 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 08 03:31:33.850752 master-0 kubenswrapper[33141]: I0308 03:31:33.850710 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-zljww\" (UID: \"c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww" Mar 08 03:31:33.862521 master-0 kubenswrapper[33141]: I0308 03:31:33.861932 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 08 03:31:33.897074 master-0 kubenswrapper[33141]: I0308 03:31:33.893284 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-env-overrides\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:31:33.921983 master-0 kubenswrapper[33141]: I0308 03:31:33.921475 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 08 03:31:33.921983 master-0 kubenswrapper[33141]: I0308 03:31:33.921683 33141 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 03:31:33.921983 master-0 kubenswrapper[33141]: I0308 03:31:33.921853 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-ovnkube-identity-cm\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:31:33.940307 master-0 kubenswrapper[33141]: I0308 03:31:33.922270 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 08 03:31:33.940307 master-0 kubenswrapper[33141]: I0308 03:31:33.923224 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 08 03:31:33.951191 master-0 kubenswrapper[33141]: I0308 03:31:33.944182 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 08 03:31:33.951191 master-0 kubenswrapper[33141]: I0308 03:31:33.944238 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 08 03:31:33.951191 master-0 kubenswrapper[33141]: I0308 03:31:33.944247 33141 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 08 03:31:33.951191 master-0 kubenswrapper[33141]: I0308 03:31:33.944307 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 08 03:31:33.951191 master-0 kubenswrapper[33141]: I0308 03:31:33.946422 33141 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 08 03:31:33.967716 master-0 kubenswrapper[33141]: I0308 03:31:33.967178 33141 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 08 03:31:33.990246 master-0 kubenswrapper[33141]: I0308 03:31:33.989814 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 08 03:31:33.998169 master-0 kubenswrapper[33141]: I0308 03:31:33.998126 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b090750-b893-42fe-8def-dfb3f4253d43-config-volume\") pod \"dns-default-p6kjc\" (UID: \"9b090750-b893-42fe-8def-dfb3f4253d43\") " pod="openshift-dns/dns-default-p6kjc" Mar 08 03:31:34.000697 master-0 kubenswrapper[33141]: I0308 03:31:34.000660 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 08 03:31:34.006268 master-0 kubenswrapper[33141]: I0308 03:31:34.006231 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/45212ce7-5f95-402e-93c4-83bac844f77d-images\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" Mar 08 03:31:34.021474 master-0 kubenswrapper[33141]: I0308 03:31:34.021322 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-p5nps" Mar 08 03:31:34.040563 master-0 kubenswrapper[33141]: I0308 03:31:34.040463 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 08 03:31:34.071975 master-0 kubenswrapper[33141]: I0308 03:31:34.068808 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-cert\") pod \"cluster-autoscaler-operator-69576476f7-jd7rl\" (UID: 
\"2ffe00fd-6834-4a5b-8b0b-b467d284f23c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" Mar 08 03:31:34.071975 master-0 kubenswrapper[33141]: I0308 03:31:34.070324 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 08 03:31:34.072699 master-0 kubenswrapper[33141]: I0308 03:31:34.072664 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-jd7rl\" (UID: \"2ffe00fd-6834-4a5b-8b0b-b467d284f23c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl" Mar 08 03:31:34.096314 master-0 kubenswrapper[33141]: I0308 03:31:34.092496 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 08 03:31:34.103931 master-0 kubenswrapper[33141]: I0308 03:31:34.102035 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-7hbhc" Mar 08 03:31:34.125935 master-0 kubenswrapper[33141]: I0308 03:31:34.121157 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-ftthh" Mar 08 03:31:34.125935 master-0 kubenswrapper[33141]: I0308 03:31:34.123992 33141 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 08 03:31:34.142651 master-0 kubenswrapper[33141]: I0308 03:31:34.142597 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 08 03:31:34.154177 master-0 kubenswrapper[33141]: I0308 03:31:34.154125 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/38287d1a-b784-4ce9-9650-949d92469519-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-9hjss\" (UID: \"38287d1a-b784-4ce9-9650-949d92469519\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss" Mar 08 03:31:34.169092 master-0 kubenswrapper[33141]: I0308 03:31:34.169052 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 08 03:31:34.172541 master-0 kubenswrapper[33141]: I0308 03:31:34.172468 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38287d1a-b784-4ce9-9650-949d92469519-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-9hjss\" (UID: \"38287d1a-b784-4ce9-9650-949d92469519\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss" Mar 08 03:31:34.180786 master-0 kubenswrapper[33141]: I0308 03:31:34.180195 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 08 03:31:34.201117 master-0 kubenswrapper[33141]: I0308 03:31:34.201062 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-bhtmv" Mar 08 03:31:34.214801 master-0 kubenswrapper[33141]: I0308 03:31:34.214760 33141 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e6716923-7f46-438f-9cc4-c0f071ca5b1a-var-lock\") pod \"e6716923-7f46-438f-9cc4-c0f071ca5b1a\" (UID: \"e6716923-7f46-438f-9cc4-c0f071ca5b1a\") " Mar 08 03:31:34.215000 master-0 kubenswrapper[33141]: I0308 03:31:34.214842 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6716923-7f46-438f-9cc4-c0f071ca5b1a-kubelet-dir\") pod \"e6716923-7f46-438f-9cc4-c0f071ca5b1a\" (UID: \"e6716923-7f46-438f-9cc4-c0f071ca5b1a\") " Mar 08 03:31:34.215602 master-0 kubenswrapper[33141]: I0308 03:31:34.215555 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6716923-7f46-438f-9cc4-c0f071ca5b1a-var-lock" (OuterVolumeSpecName: "var-lock") pod "e6716923-7f46-438f-9cc4-c0f071ca5b1a" (UID: "e6716923-7f46-438f-9cc4-c0f071ca5b1a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:31:34.215754 master-0 kubenswrapper[33141]: I0308 03:31:34.215734 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6716923-7f46-438f-9cc4-c0f071ca5b1a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e6716923-7f46-438f-9cc4-c0f071ca5b1a" (UID: "e6716923-7f46-438f-9cc4-c0f071ca5b1a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:31:34.216030 master-0 kubenswrapper[33141]: I0308 03:31:34.216006 33141 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e6716923-7f46-438f-9cc4-c0f071ca5b1a-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 03:31:34.216030 master-0 kubenswrapper[33141]: I0308 03:31:34.216026 33141 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6716923-7f46-438f-9cc4-c0f071ca5b1a-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 03:31:34.221569 master-0 kubenswrapper[33141]: I0308 03:31:34.221536 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 08 03:31:34.225072 master-0 kubenswrapper[33141]: I0308 03:31:34.225022 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-fb844\" (UID: \"27f5a0ab-3811-4c17-adc1-9ca48ae18ee1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844" Mar 08 03:31:34.241530 master-0 kubenswrapper[33141]: I0308 03:31:34.241493 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 08 03:31:34.261698 master-0 kubenswrapper[33141]: I0308 03:31:34.261651 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-dqqnp" Mar 08 03:31:34.281423 master-0 kubenswrapper[33141]: I0308 03:31:34.281369 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 08 03:31:34.287381 master-0 kubenswrapper[33141]: I0308 03:31:34.287352 33141 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45212ce7-5f95-402e-93c4-83bac844f77d-config\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" Mar 08 03:31:34.301283 master-0 kubenswrapper[33141]: I0308 03:31:34.301237 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 08 03:31:34.308261 master-0 kubenswrapper[33141]: I0308 03:31:34.308215 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/45212ce7-5f95-402e-93c4-83bac844f77d-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" Mar 08 03:31:34.320990 master-0 kubenswrapper[33141]: I0308 03:31:34.320936 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 08 03:31:34.331349 master-0 kubenswrapper[33141]: I0308 03:31:34.331312 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/45212ce7-5f95-402e-93c4-83bac844f77d-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" Mar 08 03:31:34.340708 master-0 kubenswrapper[33141]: I0308 03:31:34.340667 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 08 03:31:34.345702 master-0 kubenswrapper[33141]: I0308 03:31:34.345658 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/9b090750-b893-42fe-8def-dfb3f4253d43-metrics-tls\") pod \"dns-default-p6kjc\" (UID: \"9b090750-b893-42fe-8def-dfb3f4253d43\") " pod="openshift-dns/dns-default-p6kjc" Mar 08 03:31:34.357161 master-0 kubenswrapper[33141]: I0308 03:31:34.357120 33141 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Mar 08 03:31:34.360725 master-0 kubenswrapper[33141]: I0308 03:31:34.360691 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 08 03:31:34.379613 master-0 kubenswrapper[33141]: I0308 03:31:34.379553 33141 request.go:700] Waited for 1.010048248s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0 Mar 08 03:31:34.388263 master-0 kubenswrapper[33141]: I0308 03:31:34.388225 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 08 03:31:34.391837 master-0 kubenswrapper[33141]: I0308 03:31:34.391802 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2728b91e-d59a-4e85-b245-0f297e9377f9-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:31:34.401191 master-0 kubenswrapper[33141]: I0308 03:31:34.401151 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 08 03:31:34.410665 master-0 kubenswrapper[33141]: I0308 03:31:34.410623 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-images\") pod \"machine-config-operator-fdb5c78b5-qfbvt\" (UID: \"81abc17a-8a51-44e2-a5df-5ddb394a9fa6\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" Mar 08 03:31:34.419771 master-0 kubenswrapper[33141]: E0308 03:31:34.419721 33141 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:34.419867 master-0 kubenswrapper[33141]: E0308 03:31:34.419842 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8985dac1-38cf-41d1-b7cd-c2bfaf0f6ebc-tls-certificates podName:8985dac1-38cf-41d1-b7cd-c2bfaf0f6ebc nodeName:}" failed. No retries permitted until 2026-03-08 03:31:34.91981363 +0000 UTC m=+8.789706903 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/8985dac1-38cf-41d1-b7cd-c2bfaf0f6ebc-tls-certificates") pod "prometheus-operator-admission-webhook-8464df8497-dfmh2" (UID: "8985dac1-38cf-41d1-b7cd-c2bfaf0f6ebc") : failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:34.420947 master-0 kubenswrapper[33141]: I0308 03:31:34.420894 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 08 03:31:34.425727 master-0 kubenswrapper[33141]: I0308 03:31:34.425657 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/965f8eef-c5af-499b-b1db-cf63072781cc-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-vw4v4\" (UID: \"965f8eef-c5af-499b-b1db-cf63072781cc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-vw4v4" Mar 08 03:31:34.430114 master-0 kubenswrapper[33141]: E0308 
03:31:34.430068 33141 configmap.go:193] Couldn't get configMap openshift-network-operator/iptables-alerter-script: failed to sync configmap cache: timed out waiting for the condition Mar 08 03:31:34.430222 master-0 kubenswrapper[33141]: E0308 03:31:34.430197 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aadf7b67-db33-4392-81f5-1b93eef54545-iptables-alerter-script podName:aadf7b67-db33-4392-81f5-1b93eef54545 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:34.930172699 +0000 UTC m=+8.800065892 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "iptables-alerter-script" (UniqueName: "kubernetes.io/configmap/aadf7b67-db33-4392-81f5-1b93eef54545-iptables-alerter-script") pod "iptables-alerter-fpxrc" (UID: "aadf7b67-db33-4392-81f5-1b93eef54545") : failed to sync configmap cache: timed out waiting for the condition Mar 08 03:31:34.440685 master-0 kubenswrapper[33141]: I0308 03:31:34.440642 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-rgflg" Mar 08 03:31:34.473937 master-0 kubenswrapper[33141]: I0308 03:31:34.471399 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 08 03:31:34.477925 master-0 kubenswrapper[33141]: I0308 03:31:34.475263 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2728b91e-d59a-4e85-b245-0f297e9377f9-serving-cert\") pod \"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:31:34.485466 master-0 kubenswrapper[33141]: I0308 03:31:34.483221 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 08 03:31:34.485466 master-0 kubenswrapper[33141]: I0308 03:31:34.485309 33141 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8c65557b-9566-49f1-a049-fe492ca201b5-images\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:31:34.503934 master-0 kubenswrapper[33141]: I0308 03:31:34.503436 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-wvdjh" Mar 08 03:31:34.525586 master-0 kubenswrapper[33141]: I0308 03:31:34.525270 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 08 03:31:34.527317 master-0 kubenswrapper[33141]: I0308 03:31:34.527275 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2728b91e-d59a-4e85-b245-0f297e9377f9-service-ca-bundle\") pod \"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " pod="openshift-insights/insights-operator-8f89dfddd-9l8dc" Mar 08 03:31:34.541223 master-0 kubenswrapper[33141]: I0308 03:31:34.541184 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 08 03:31:34.544215 master-0 kubenswrapper[33141]: E0308 03:31:34.544171 33141 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:34.544215 master-0 kubenswrapper[33141]: E0308 03:31:34.544202 33141 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: failed to sync configmap cache: timed out waiting for the condition Mar 08 03:31:34.544309 master-0 kubenswrapper[33141]: E0308 03:31:34.544172 33141 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: 
timed out waiting for the condition Mar 08 03:31:34.544309 master-0 kubenswrapper[33141]: E0308 03:31:34.544270 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-server-tls podName:1e82d678-b5bb-4aec-9b5d-435305e8bdc2 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.044250849 +0000 UTC m=+8.914144042 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-server-tls") pod "metrics-server-6977dfbb45-dwjx9" (UID: "1e82d678-b5bb-4aec-9b5d-435305e8bdc2") : failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:34.544309 master-0 kubenswrapper[33141]: E0308 03:31:34.544177 33141 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:34.544309 master-0 kubenswrapper[33141]: E0308 03:31:34.544288 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovnkube-script-lib podName:9d40fba7-84f0-46d7-9b49-dbba7aab20c5 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.044282149 +0000 UTC m=+8.914175342 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovnkube-script-lib") pod "ovnkube-node-jq7bv" (UID: "9d40fba7-84f0-46d7-9b49-dbba7aab20c5") : failed to sync configmap cache: timed out waiting for the condition Mar 08 03:31:34.544309 master-0 kubenswrapper[33141]: E0308 03:31:34.544302 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/beed862c-6283-4568-aa2e-f49b31e30a3b-metrics-client-ca podName:beed862c-6283-4568-aa2e-f49b31e30a3b nodeName:}" failed. 
No retries permitted until 2026-03-08 03:31:35.04429683 +0000 UTC m=+8.914190023 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/beed862c-6283-4568-aa2e-f49b31e30a3b-metrics-client-ca") pod "node-exporter-sjs7q" (UID: "beed862c-6283-4568-aa2e-f49b31e30a3b") : failed to sync configmap cache: timed out waiting for the condition Mar 08 03:31:34.544453 master-0 kubenswrapper[33141]: E0308 03:31:34.544316 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-proxy-tls podName:81abc17a-8a51-44e2-a5df-5ddb394a9fa6 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.04430939 +0000 UTC m=+8.914202583 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-proxy-tls") pod "machine-config-operator-fdb5c78b5-qfbvt" (UID: "81abc17a-8a51-44e2-a5df-5ddb394a9fa6") : failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:34.544453 master-0 kubenswrapper[33141]: E0308 03:31:34.544331 33141 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:34.544453 master-0 kubenswrapper[33141]: E0308 03:31:34.544334 33141 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:34.544453 master-0 kubenswrapper[33141]: E0308 03:31:34.544353 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b537a655-ef73-40b5-b228-95ab6cfdedf2-machine-approver-tls podName:b537a655-ef73-40b5-b228-95ab6cfdedf2 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.044348251 +0000 UTC m=+8.914241444 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/b537a655-ef73-40b5-b228-95ab6cfdedf2-machine-approver-tls") pod "machine-approver-754bdc9f9d-lssws" (UID: "b537a655-ef73-40b5-b228-95ab6cfdedf2") : failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:34.544453 master-0 kubenswrapper[33141]: E0308 03:31:34.544366 33141 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:34.544453 master-0 kubenswrapper[33141]: E0308 03:31:34.544367 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/daf9e0ac-b5a3-4a3e-aa57-31b810f634ef-webhook-certs podName:daf9e0ac-b5a3-4a3e-aa57-31b810f634ef nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.044360971 +0000 UTC m=+8.914254164 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/daf9e0ac-b5a3-4a3e-aa57-31b810f634ef-webhook-certs") pod "multus-admission-controller-7769569c45-lxr7s" (UID: "daf9e0ac-b5a3-4a3e-aa57-31b810f634ef") : failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:34.544453 master-0 kubenswrapper[33141]: E0308 03:31:34.544410 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-tls podName:ae8f3a1e-689b-4107-993a-dde67f4decf2 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.044394772 +0000 UTC m=+8.914287965 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-lkwmx" (UID: "ae8f3a1e-689b-4107-993a-dde67f4decf2") : failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:34.544656 master-0 kubenswrapper[33141]: E0308 03:31:34.544489 33141 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 08 03:31:34.544656 master-0 kubenswrapper[33141]: E0308 03:31:34.544517 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b537a655-ef73-40b5-b228-95ab6cfdedf2-auth-proxy-config podName:b537a655-ef73-40b5-b228-95ab6cfdedf2 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.044509295 +0000 UTC m=+8.914402488 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/b537a655-ef73-40b5-b228-95ab6cfdedf2-auth-proxy-config") pod "machine-approver-754bdc9f9d-lssws" (UID: "b537a655-ef73-40b5-b228-95ab6cfdedf2") : failed to sync configmap cache: timed out waiting for the condition Mar 08 03:31:34.544656 master-0 kubenswrapper[33141]: E0308 03:31:34.544537 33141 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:34.544656 master-0 kubenswrapper[33141]: E0308 03:31:34.544557 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16ca7ace-9608-4686-a039-a6ba6e3ab837-openshift-state-metrics-tls podName:16ca7ace-9608-4686-a039-a6ba6e3ab837 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.044552286 +0000 UTC m=+8.914445479 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/16ca7ace-9608-4686-a039-a6ba6e3ab837-openshift-state-metrics-tls") pod "openshift-state-metrics-74cc79fd76-wwmnn" (UID: "16ca7ace-9608-4686-a039-a6ba6e3ab837") : failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:34.545309 master-0 kubenswrapper[33141]: E0308 03:31:34.545278 33141 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 08 03:31:34.545360 master-0 kubenswrapper[33141]: E0308 03:31:34.545319 33141 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 08 03:31:34.545360 master-0 kubenswrapper[33141]: E0308 03:31:34.545342 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ae8f3a1e-689b-4107-993a-dde67f4decf2-metrics-client-ca podName:ae8f3a1e-689b-4107-993a-dde67f4decf2 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.045323916 +0000 UTC m=+8.915217109 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/ae8f3a1e-689b-4107-993a-dde67f4decf2-metrics-client-ca") pod "prometheus-operator-5ff8674d55-lkwmx" (UID: "ae8f3a1e-689b-4107-993a-dde67f4decf2") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.545360 master-0 kubenswrapper[33141]: E0308 03:31:34.545355 33141 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-da0kci31im4hq: failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.545449 master-0 kubenswrapper[33141]: E0308 03:31:34.545366 33141 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.545449 master-0 kubenswrapper[33141]: E0308 03:31:34.545379 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-client-ca-bundle podName:1e82d678-b5bb-4aec-9b5d-435305e8bdc2 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.045371557 +0000 UTC m=+8.915264750 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-client-ca-bundle") pod "metrics-server-6977dfbb45-dwjx9" (UID: "1e82d678-b5bb-4aec-9b5d-435305e8bdc2") : failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.545449 master-0 kubenswrapper[33141]: E0308 03:31:34.545394 33141 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.545449 master-0 kubenswrapper[33141]: E0308 03:31:34.545399 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-etcd-client podName:f2057f75-159d-4416-a234-050f0fe1afc9 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.045391477 +0000 UTC m=+8.915284670 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-etcd-client") pod "apiserver-5bf974f84f-hzx44" (UID: "f2057f75-159d-4416-a234-050f0fe1afc9") : failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.545449 master-0 kubenswrapper[33141]: E0308 03:31:34.545418 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-kube-rbac-proxy-config podName:beed862c-6283-4568-aa2e-f49b31e30a3b nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.045412378 +0000 UTC m=+8.915305571 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-kube-rbac-proxy-config") pod "node-exporter-sjs7q" (UID: "beed862c-6283-4568-aa2e-f49b31e30a3b") : failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.545449 master-0 kubenswrapper[33141]: E0308 03:31:34.545432 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/42b9f2d1-da5c-46b5-b131-d206fa37d436-mcc-auth-proxy-config podName:42b9f2d1-da5c-46b5-b131-d206fa37d436 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.045426558 +0000 UTC m=+8.915319751 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcc-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/42b9f2d1-da5c-46b5-b131-d206fa37d436-mcc-auth-proxy-config") pod "machine-config-controller-ff46b7bdf-27kjz" (UID: "42b9f2d1-da5c-46b5-b131-d206fa37d436") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.545449 master-0 kubenswrapper[33141]: E0308 03:31:34.545448 33141 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.545682 master-0 kubenswrapper[33141]: E0308 03:31:34.545461 33141 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.545682 master-0 kubenswrapper[33141]: E0308 03:31:34.545471 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-image-import-ca podName:f2057f75-159d-4416-a234-050f0fe1afc9 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.045465919 +0000 UTC m=+8.915359112 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-image-import-ca") pod "apiserver-5bf974f84f-hzx44" (UID: "f2057f75-159d-4416-a234-050f0fe1afc9") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.545682 master-0 kubenswrapper[33141]: E0308 03:31:34.545483 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-auth-proxy-config podName:81abc17a-8a51-44e2-a5df-5ddb394a9fa6 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.045477949 +0000 UTC m=+8.915371142 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-auth-proxy-config") pod "machine-config-operator-fdb5c78b5-qfbvt" (UID: "81abc17a-8a51-44e2-a5df-5ddb394a9fa6") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.545682 master-0 kubenswrapper[33141]: E0308 03:31:34.545506 33141 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.545682 master-0 kubenswrapper[33141]: E0308 03:31:34.545548 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-service-ca-bundle podName:e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.045519741 +0000 UTC m=+8.915412934 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-service-ca-bundle") pod "router-default-79f8cd6fdd-tkxj9" (UID: "e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.545682 master-0 kubenswrapper[33141]: E0308 03:31:34.545581 33141 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.545682 master-0 kubenswrapper[33141]: E0308 03:31:34.545602 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-images podName:e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.045596322 +0000 UTC m=+8.915489515 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-images") pod "cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" (UID: "e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.545682 master-0 kubenswrapper[33141]: E0308 03:31:34.545619 33141 secret.go:189] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.545682 master-0 kubenswrapper[33141]: E0308 03:31:34.545644 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-stats-auth podName:e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.045637873 +0000 UTC m=+8.915531066 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-stats-auth") pod "router-default-79f8cd6fdd-tkxj9" (UID: "e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d") : failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.545682 master-0 kubenswrapper[33141]: E0308 03:31:34.545645 33141 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.545682 master-0 kubenswrapper[33141]: E0308 03:31:34.545658 33141 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.545682 master-0 kubenswrapper[33141]: E0308 03:31:34.545672 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd53c98b-51cc-498a-ab37-f743a27bdcfb-serving-cert podName:bd53c98b-51cc-498a-ab37-f743a27bdcfb nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.045666214 +0000 UTC m=+8.915559407 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/bd53c98b-51cc-498a-ab37-f743a27bdcfb-serving-cert") pod "controller-manager-75cd54f7f-2bg6l" (UID: "bd53c98b-51cc-498a-ab37-f743a27bdcfb") : failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.545682 master-0 kubenswrapper[33141]: E0308 03:31:34.545681 33141 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.545682 master-0 kubenswrapper[33141]: E0308 03:31:34.545686 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-client-certs podName:1e82d678-b5bb-4aec-9b5d-435305e8bdc2 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.045680045 +0000 UTC m=+8.915573238 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-client-certs") pod "metrics-server-6977dfbb45-dwjx9" (UID: "1e82d678-b5bb-4aec-9b5d-435305e8bdc2") : failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.546148 master-0 kubenswrapper[33141]: E0308 03:31:34.545702 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-kube-rbac-proxy-config podName:bfc9ae4f-eb67-4ed1-97a1-d67e839fd601 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.045697435 +0000 UTC m=+8.915590628 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-68b88f8cb5-vxn59" (UID: "bfc9ae4f-eb67-4ed1-97a1-d67e839fd601") : failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.546148 master-0 kubenswrapper[33141]: E0308 03:31:34.545704 33141 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.546148 master-0 kubenswrapper[33141]: E0308 03:31:34.545732 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-custom-resource-state-configmap podName:bfc9ae4f-eb67-4ed1-97a1-d67e839fd601 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.045725536 +0000 UTC m=+8.915618729 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-68b88f8cb5-vxn59" (UID: "bfc9ae4f-eb67-4ed1-97a1-d67e839fd601") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.546148 master-0 kubenswrapper[33141]: E0308 03:31:34.545760 33141 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.546148 master-0 kubenswrapper[33141]: E0308 03:31:34.545779 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b537a655-ef73-40b5-b228-95ab6cfdedf2-config podName:b537a655-ef73-40b5-b228-95ab6cfdedf2 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.045774167 +0000 UTC m=+8.915667360 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b537a655-ef73-40b5-b228-95ab6cfdedf2-config") pod "machine-approver-754bdc9f9d-lssws" (UID: "b537a655-ef73-40b5-b228-95ab6cfdedf2") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.546148 master-0 kubenswrapper[33141]: E0308 03:31:34.545805 33141 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.546148 master-0 kubenswrapper[33141]: E0308 03:31:34.545829 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-config podName:bd53c98b-51cc-498a-ab37-f743a27bdcfb nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.045823308 +0000 UTC m=+8.915716691 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-config") pod "controller-manager-75cd54f7f-2bg6l" (UID: "bd53c98b-51cc-498a-ab37-f743a27bdcfb") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.546148 master-0 kubenswrapper[33141]: E0308 03:31:34.545843 33141 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.546148 master-0 kubenswrapper[33141]: E0308 03:31:34.545863 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-kube-rbac-proxy-config podName:ae8f3a1e-689b-4107-993a-dde67f4decf2 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.045858199 +0000 UTC m=+8.915751392 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-5ff8674d55-lkwmx" (UID: "ae8f3a1e-689b-4107-993a-dde67f4decf2") : failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.546148 master-0 kubenswrapper[33141]: E0308 03:31:34.545882 33141 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.546148 master-0 kubenswrapper[33141]: E0308 03:31:34.545928 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-auth-proxy-config podName:e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.04589414 +0000 UTC m=+8.915787333 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" (UID: "e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.548242 master-0 kubenswrapper[33141]: E0308 03:31:34.547056 33141 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.548242 master-0 kubenswrapper[33141]: E0308 03:31:34.547094 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-proxy-ca-bundles podName:bd53c98b-51cc-498a-ab37-f743a27bdcfb nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.04708422 +0000 UTC m=+8.916977403 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-proxy-ca-bundles") pod "controller-manager-75cd54f7f-2bg6l" (UID: "bd53c98b-51cc-498a-ab37-f743a27bdcfb") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.548242 master-0 kubenswrapper[33141]: E0308 03:31:34.547147 33141 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.548242 master-0 kubenswrapper[33141]: E0308 03:31:34.547232 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16ca7ace-9608-4686-a039-a6ba6e3ab837-openshift-state-metrics-kube-rbac-proxy-config podName:16ca7ace-9608-4686-a039-a6ba6e3ab837 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.047205843 +0000 UTC m=+8.917099076 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/16ca7ace-9608-4686-a039-a6ba6e3ab837-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-74cc79fd76-wwmnn" (UID: "16ca7ace-9608-4686-a039-a6ba6e3ab837") : failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.548242 master-0 kubenswrapper[33141]: E0308 03:31:34.547308 33141 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.548242 master-0 kubenswrapper[33141]: E0308 03:31:34.547354 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-proxy-tls podName:7fafb070-7914-41c2-a8b2-e609a0e5bf9f nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.047340436 +0000 UTC m=+8.917233669 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-proxy-tls") pod "machine-config-daemon-xv682" (UID: "7fafb070-7914-41c2-a8b2-e609a0e5bf9f") : failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.548242 master-0 kubenswrapper[33141]: E0308 03:31:34.547419 33141 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.548242 master-0 kubenswrapper[33141]: E0308 03:31:34.547449 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-client-ca podName:a0ee8c53-bf36-4459-a2c2-380293a09e26 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.047441519 +0000 UTC m=+8.917334712 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-client-ca") pod "route-controller-manager-694774cfc9-r5gkh" (UID: "a0ee8c53-bf36-4459-a2c2-380293a09e26") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.549727 master-0 kubenswrapper[33141]: E0308 03:31:34.549694 33141 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.549807 master-0 kubenswrapper[33141]: E0308 03:31:34.549739 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-serving-cert podName:f2057f75-159d-4416-a234-050f0fe1afc9 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.049728896 +0000 UTC m=+8.919622079 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-serving-cert") pod "apiserver-5bf974f84f-hzx44" (UID: "f2057f75-159d-4416-a234-050f0fe1afc9") : failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.549807 master-0 kubenswrapper[33141]: E0308 03:31:34.549775 33141 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.549807 master-0 kubenswrapper[33141]: E0308 03:31:34.549780 33141 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.550056 master-0 kubenswrapper[33141]: E0308 03:31:34.549823 33141 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.550056 master-0 kubenswrapper[33141]: E0308 03:31:34.549858 33141 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.550056 master-0 kubenswrapper[33141]: E0308 03:31:34.549801 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-etcd-serving-ca podName:f2057f75-159d-4416-a234-050f0fe1afc9 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.049793218 +0000 UTC m=+8.919686511 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-etcd-serving-ca") pod "apiserver-5bf974f84f-hzx44" (UID: "f2057f75-159d-4416-a234-050f0fe1afc9") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.550056 master-0 kubenswrapper[33141]: E0308 03:31:34.549938 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-mcd-auth-proxy-config podName:7fafb070-7914-41c2-a8b2-e609a0e5bf9f nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.04989772 +0000 UTC m=+8.919790923 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-mcd-auth-proxy-config") pod "machine-config-daemon-xv682" (UID: "7fafb070-7914-41c2-a8b2-e609a0e5bf9f") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.550056 master-0 kubenswrapper[33141]: E0308 03:31:34.549960 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-metrics-client-ca podName:bfc9ae4f-eb67-4ed1-97a1-d67e839fd601 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.049951302 +0000 UTC m=+8.919844595 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-metrics-client-ca") pod "kube-state-metrics-68b88f8cb5-vxn59" (UID: "bfc9ae4f-eb67-4ed1-97a1-d67e839fd601") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.550056 master-0 kubenswrapper[33141]: E0308 03:31:34.549979 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-tls podName:beed862c-6283-4568-aa2e-f49b31e30a3b nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.049970022 +0000 UTC m=+8.919863335 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-tls") pod "node-exporter-sjs7q" (UID: "beed862c-6283-4568-aa2e-f49b31e30a3b") : failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.550056 master-0 kubenswrapper[33141]: E0308 03:31:34.550055 33141 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.550258 master-0 kubenswrapper[33141]: E0308 03:31:34.550093 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-client-ca podName:bd53c98b-51cc-498a-ab37-f743a27bdcfb nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.050083755 +0000 UTC m=+8.919977048 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-client-ca") pod "controller-manager-75cd54f7f-2bg6l" (UID: "bd53c98b-51cc-498a-ab37-f743a27bdcfb") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.550258 master-0 kubenswrapper[33141]: E0308 03:31:34.550106 33141 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.550258 master-0 kubenswrapper[33141]: E0308 03:31:34.550113 33141 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.550258 master-0 kubenswrapper[33141]: E0308 03:31:34.550121 33141 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.550258 master-0 kubenswrapper[33141]: E0308 03:31:34.550159 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6176b631-3911-41cd-beb6-5bc2e924c3a7-cert podName:6176b631-3911-41cd-beb6-5bc2e924c3a7 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.050134636 +0000 UTC m=+8.920027829 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/6176b631-3911-41cd-beb6-5bc2e924c3a7-cert") pod "ingress-canary-fhncs" (UID: "6176b631-3911-41cd-beb6-5bc2e924c3a7") : failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.550258 master-0 kubenswrapper[33141]: E0308 03:31:34.550175 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8c65557b-9566-49f1-a049-fe492ca201b5-config podName:8c65557b-9566-49f1-a049-fe492ca201b5 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.050169897 +0000 UTC m=+8.920063090 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8c65557b-9566-49f1-a049-fe492ca201b5-config") pod "machine-api-operator-84bf6db4f9-5l4t7" (UID: "8c65557b-9566-49f1-a049-fe492ca201b5") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.550258 master-0 kubenswrapper[33141]: E0308 03:31:34.550187 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42b9f2d1-da5c-46b5-b131-d206fa37d436-proxy-tls podName:42b9f2d1-da5c-46b5-b131-d206fa37d436 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.050182567 +0000 UTC m=+8.920075760 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/42b9f2d1-da5c-46b5-b131-d206fa37d436-proxy-tls") pod "machine-config-controller-ff46b7bdf-27kjz" (UID: "42b9f2d1-da5c-46b5-b131-d206fa37d436") : failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.551379 master-0 kubenswrapper[33141]: E0308 03:31:34.551354 33141 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.551379 master-0 kubenswrapper[33141]: E0308 03:31:34.551378 33141 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.551470 master-0 kubenswrapper[33141]: E0308 03:31:34.551397 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0ee8c53-bf36-4459-a2c2-380293a09e26-serving-cert podName:a0ee8c53-bf36-4459-a2c2-380293a09e26 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.051387448 +0000 UTC m=+8.921280641 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a0ee8c53-bf36-4459-a2c2-380293a09e26-serving-cert") pod "route-controller-manager-694774cfc9-r5gkh" (UID: "a0ee8c53-bf36-4459-a2c2-380293a09e26") : failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.551470 master-0 kubenswrapper[33141]: E0308 03:31:34.551412 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-audit podName:f2057f75-159d-4416-a234-050f0fe1afc9 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.051404158 +0000 UTC m=+8.921297351 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-audit") pod "apiserver-5bf974f84f-hzx44" (UID: "f2057f75-159d-4416-a234-050f0fe1afc9") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.551470 master-0 kubenswrapper[33141]: E0308 03:31:34.551427 33141 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.551470 master-0 kubenswrapper[33141]: E0308 03:31:34.551427 33141 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.551470 master-0 kubenswrapper[33141]: E0308 03:31:34.551455 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-configmap-kubelet-serving-ca-bundle podName:1e82d678-b5bb-4aec-9b5d-435305e8bdc2 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.051448369 +0000 UTC m=+8.921341562 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-configmap-kubelet-serving-ca-bundle") pod "metrics-server-6977dfbb45-dwjx9" (UID: "1e82d678-b5bb-4aec-9b5d-435305e8bdc2") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.551470 master-0 kubenswrapper[33141]: E0308 03:31:34.551469 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-tls podName:bfc9ae4f-eb67-4ed1-97a1-d67e839fd601 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.051462519 +0000 UTC m=+8.921355712 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-tls") pod "kube-state-metrics-68b88f8cb5-vxn59" (UID: "bfc9ae4f-eb67-4ed1-97a1-d67e839fd601") : failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.552653 master-0 kubenswrapper[33141]: E0308 03:31:34.552615 33141 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.552653 master-0 kubenswrapper[33141]: E0308 03:31:34.552649 33141 secret.go:189] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.552746 master-0 kubenswrapper[33141]: E0308 03:31:34.552631 33141 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.552746 master-0 kubenswrapper[33141]: E0308 03:31:34.552684 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-metrics-certs podName:e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.05266816 +0000 UTC m=+8.922561453 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-metrics-certs") pod "router-default-79f8cd6fdd-tkxj9" (UID: "e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d") : failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.552746 master-0 kubenswrapper[33141]: E0308 03:31:34.552717 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovn-node-metrics-cert podName:9d40fba7-84f0-46d7-9b49-dbba7aab20c5 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.052708631 +0000 UTC m=+8.922601824 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovn-node-metrics-cert") pod "ovnkube-node-jq7bv" (UID: "9d40fba7-84f0-46d7-9b49-dbba7aab20c5") : failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.552746 master-0 kubenswrapper[33141]: E0308 03:31:34.552732 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/99923acc-a1b4-4fbc-a636-f9c145856b01-certs podName:99923acc-a1b4-4fbc-a636-f9c145856b01 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.052725851 +0000 UTC m=+8.922619054 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/99923acc-a1b4-4fbc-a636-f9c145856b01-certs") pod "machine-config-server-fstmq" (UID: "99923acc-a1b4-4fbc-a636-f9c145856b01") : failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.553834 master-0 kubenswrapper[33141]: E0308 03:31:34.553811 33141 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.553881 master-0 kubenswrapper[33141]: E0308 03:31:34.553850 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/16ca7ace-9608-4686-a039-a6ba6e3ab837-metrics-client-ca podName:16ca7ace-9608-4686-a039-a6ba6e3ab837 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.053841769 +0000 UTC m=+8.923734962 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/16ca7ace-9608-4686-a039-a6ba6e3ab837-metrics-client-ca") pod "openshift-state-metrics-74cc79fd76-wwmnn" (UID: "16ca7ace-9608-4686-a039-a6ba6e3ab837") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.553930 master-0 kubenswrapper[33141]: E0308 03:31:34.553897 33141 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.553959 master-0 kubenswrapper[33141]: E0308 03:31:34.553935 33141 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.553959 master-0 kubenswrapper[33141]: E0308 03:31:34.553947 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-config podName:a0ee8c53-bf36-4459-a2c2-380293a09e26 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.053940172 +0000 UTC m=+8.923833365 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-config") pod "route-controller-manager-694774cfc9-r5gkh" (UID: "a0ee8c53-bf36-4459-a2c2-380293a09e26") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.554025 master-0 kubenswrapper[33141]: E0308 03:31:34.553967 33141 secret.go:189] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.554025 master-0 kubenswrapper[33141]: E0308 03:31:34.553984 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-metrics-server-audit-profiles podName:1e82d678-b5bb-4aec-9b5d-435305e8bdc2 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.053973862 +0000 UTC m=+8.923867165 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-metrics-server-audit-profiles") pod "metrics-server-6977dfbb45-dwjx9" (UID: "1e82d678-b5bb-4aec-9b5d-435305e8bdc2") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.554025 master-0 kubenswrapper[33141]: E0308 03:31:34.554003 33141 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.554025 master-0 kubenswrapper[33141]: E0308 03:31:34.554004 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-default-certificate podName:e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.053996303 +0000 UTC m=+8.923889496 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-default-certificate") pod "router-default-79f8cd6fdd-tkxj9" (UID: "e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d") : failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.554135 master-0 kubenswrapper[33141]: E0308 03:31:34.554033 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-config podName:f2057f75-159d-4416-a234-050f0fe1afc9 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.054027414 +0000 UTC m=+8.923920597 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-config") pod "apiserver-5bf974f84f-hzx44" (UID: "f2057f75-159d-4416-a234-050f0fe1afc9") : failed to sync configmap cache: timed out waiting for the condition
Mar 08 03:31:34.555247 master-0 kubenswrapper[33141]: E0308 03:31:34.555219 33141 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition
Mar 08 03:31:34.555316 master-0 kubenswrapper[33141]: E0308 03:31:34.555267 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-encryption-config podName:f2057f75-159d-4416-a234-050f0fe1afc9 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.055254634 +0000 UTC m=+8.925147817 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-encryption-config") pod "apiserver-5bf974f84f-hzx44" (UID: "f2057f75-159d-4416-a234-050f0fe1afc9") : failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:34.559821 master-0 kubenswrapper[33141]: E0308 03:31:34.559786 33141 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:34.559921 master-0 kubenswrapper[33141]: E0308 03:31:34.559832 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-cloud-controller-manager-operator-tls podName:e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.059822729 +0000 UTC m=+8.929715922 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" (UID: "e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff") : failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:34.560142 master-0 kubenswrapper[33141]: E0308 03:31:34.560123 33141 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 08 03:31:34.560219 master-0 kubenswrapper[33141]: E0308 03:31:34.560150 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-trusted-ca-bundle podName:f2057f75-159d-4416-a234-050f0fe1afc9 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.060143377 +0000 UTC m=+8.930036570 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-trusted-ca-bundle") pod "apiserver-5bf974f84f-hzx44" (UID: "f2057f75-159d-4416-a234-050f0fe1afc9") : failed to sync configmap cache: timed out waiting for the condition Mar 08 03:31:34.560746 master-0 kubenswrapper[33141]: E0308 03:31:34.560714 33141 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:34.560746 master-0 kubenswrapper[33141]: E0308 03:31:34.560729 33141 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:34.560838 master-0 kubenswrapper[33141]: E0308 03:31:34.560756 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/99923acc-a1b4-4fbc-a636-f9c145856b01-node-bootstrap-token podName:99923acc-a1b4-4fbc-a636-f9c145856b01 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.060747022 +0000 UTC m=+8.930640215 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/99923acc-a1b4-4fbc-a636-f9c145856b01-node-bootstrap-token") pod "machine-config-server-fstmq" (UID: "99923acc-a1b4-4fbc-a636-f9c145856b01") : failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:34.560838 master-0 kubenswrapper[33141]: E0308 03:31:34.560779 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls podName:8c65557b-9566-49f1-a049-fe492ca201b5 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:35.060769163 +0000 UTC m=+8.930662366 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls") pod "machine-api-operator-84bf6db4f9-5l4t7" (UID: "8c65557b-9566-49f1-a049-fe492ca201b5") : failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:34.563553 master-0 kubenswrapper[33141]: I0308 03:31:34.563525 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 08 03:31:34.580597 master-0 kubenswrapper[33141]: I0308 03:31:34.580548 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 08 03:31:34.600837 master-0 kubenswrapper[33141]: I0308 03:31:34.600751 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 08 03:31:34.620678 master-0 kubenswrapper[33141]: I0308 03:31:34.620614 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 08 03:31:34.640645 master-0 kubenswrapper[33141]: I0308 03:31:34.640607 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-jzkrb" Mar 08 03:31:34.660842 master-0 kubenswrapper[33141]: I0308 03:31:34.660805 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-gqqgx" Mar 08 03:31:34.681539 master-0 kubenswrapper[33141]: I0308 03:31:34.681322 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 08 03:31:34.700651 master-0 kubenswrapper[33141]: I0308 03:31:34.700594 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-s25xz" Mar 08 03:31:34.721051 master-0 
kubenswrapper[33141]: I0308 03:31:34.721004 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 08 03:31:34.742299 master-0 kubenswrapper[33141]: I0308 03:31:34.742135 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 08 03:31:34.761283 master-0 kubenswrapper[33141]: I0308 03:31:34.761125 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 08 03:31:34.781120 master-0 kubenswrapper[33141]: I0308 03:31:34.781073 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 08 03:31:34.800491 master-0 kubenswrapper[33141]: I0308 03:31:34.800442 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 08 03:31:34.831386 master-0 kubenswrapper[33141]: I0308 03:31:34.831317 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-h5rwm" Mar 08 03:31:34.836927 master-0 kubenswrapper[33141]: I0308 03:31:34.836864 33141 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" Mar 08 03:31:34.839179 master-0 kubenswrapper[33141]: I0308 03:31:34.839135 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-check-endpoints/0.log" Mar 08 03:31:34.841232 master-0 kubenswrapper[33141]: I0308 03:31:34.841198 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-9gswq" Mar 08 03:31:34.842501 master-0 kubenswrapper[33141]: I0308 03:31:34.842465 33141 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="29daacb2c26fcf18f9f3b673ab22e9e9aa0de4d9b19b229cdf38f36ca276b550" exitCode=255 Mar 08 03:31:34.860238 master-0 kubenswrapper[33141]: I0308 03:31:34.860177 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 08 03:31:34.882008 master-0 kubenswrapper[33141]: I0308 03:31:34.881149 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 08 03:31:34.908197 master-0 kubenswrapper[33141]: I0308 03:31:34.908150 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 08 03:31:34.908395 master-0 kubenswrapper[33141]: E0308 03:31:34.908259 33141 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 08 03:31:34.920451 master-0 kubenswrapper[33141]: I0308 03:31:34.920405 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 08 03:31:34.931846 master-0 kubenswrapper[33141]: I0308 03:31:34.931735 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"tls-certificates\" (UniqueName: \"kubernetes.io/secret/8985dac1-38cf-41d1-b7cd-c2bfaf0f6ebc-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-dfmh2\" (UID: \"8985dac1-38cf-41d1-b7cd-c2bfaf0f6ebc\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmh2" Mar 08 03:31:34.932702 master-0 kubenswrapper[33141]: I0308 03:31:34.932675 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/aadf7b67-db33-4392-81f5-1b93eef54545-iptables-alerter-script\") pod \"iptables-alerter-fpxrc\" (UID: \"aadf7b67-db33-4392-81f5-1b93eef54545\") " pod="openshift-network-operator/iptables-alerter-fpxrc" Mar 08 03:31:34.933374 master-0 kubenswrapper[33141]: I0308 03:31:34.933357 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/aadf7b67-db33-4392-81f5-1b93eef54545-iptables-alerter-script\") pod \"iptables-alerter-fpxrc\" (UID: \"aadf7b67-db33-4392-81f5-1b93eef54545\") " pod="openshift-network-operator/iptables-alerter-fpxrc" Mar 08 03:31:34.940386 master-0 kubenswrapper[33141]: I0308 03:31:34.940357 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 08 03:31:34.960673 master-0 kubenswrapper[33141]: I0308 03:31:34.960510 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 08 03:31:34.981333 master-0 kubenswrapper[33141]: I0308 03:31:34.981270 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-lf8gs" Mar 08 03:31:35.000615 master-0 kubenswrapper[33141]: I0308 03:31:35.000533 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-g676s" Mar 08 03:31:35.020950 master-0 
kubenswrapper[33141]: I0308 03:31:35.020883 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 08 03:31:35.041183 master-0 kubenswrapper[33141]: I0308 03:31:35.041128 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 08 03:31:35.069558 master-0 kubenswrapper[33141]: I0308 03:31:35.069468 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 08 03:31:35.081758 master-0 kubenswrapper[33141]: I0308 03:31:35.081680 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 08 03:31:35.100580 master-0 kubenswrapper[33141]: I0308 03:31:35.100515 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-d6gwq" Mar 08 03:31:35.121884 master-0 kubenswrapper[33141]: I0308 03:31:35.121832 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 08 03:31:35.137450 master-0 kubenswrapper[33141]: I0308 03:31:35.137372 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b537a655-ef73-40b5-b228-95ab6cfdedf2-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws" Mar 08 03:31:35.137694 master-0 kubenswrapper[33141]: I0308 03:31:35.137659 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/beed862c-6283-4568-aa2e-f49b31e30a3b-metrics-client-ca\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q" Mar 08 03:31:35.137862 
master-0 kubenswrapper[33141]: I0308 03:31:35.137819 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovnkube-script-lib\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:35.137960 master-0 kubenswrapper[33141]: I0308 03:31:35.137890 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" Mar 08 03:31:35.138119 master-0 kubenswrapper[33141]: I0308 03:31:35.138089 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-client-certs\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:31:35.138182 master-0 kubenswrapper[33141]: I0308 03:31:35.138145 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-client-ca-bundle\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:31:35.138247 master-0 kubenswrapper[33141]: I0308 03:31:35.138185 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-service-ca-bundle\") pod 
\"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:31:35.138306 master-0 kubenswrapper[33141]: I0308 03:31:35.138255 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/daf9e0ac-b5a3-4a3e-aa57-31b810f634ef-webhook-certs\") pod \"multus-admission-controller-7769569c45-lxr7s\" (UID: \"daf9e0ac-b5a3-4a3e-aa57-31b810f634ef\") " pod="openshift-multus/multus-admission-controller-7769569c45-lxr7s" Mar 08 03:31:35.138363 master-0 kubenswrapper[33141]: I0308 03:31:35.138296 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovnkube-script-lib\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:35.138363 master-0 kubenswrapper[33141]: I0308 03:31:35.138317 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" Mar 08 03:31:35.138363 master-0 kubenswrapper[33141]: I0308 03:31:35.138357 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-stats-auth\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:31:35.138600 master-0 kubenswrapper[33141]: I0308 03:31:35.138562 33141 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:31:35.138653 master-0 kubenswrapper[33141]: I0308 03:31:35.138615 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-config\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:31:35.138653 master-0 kubenswrapper[33141]: I0308 03:31:35.138566 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-service-ca-bundle\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:31:35.138739 master-0 kubenswrapper[33141]: I0308 03:31:35.138661 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:31:35.138739 master-0 kubenswrapper[33141]: I0308 03:31:35.138682 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-stats-auth\") pod 
\"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:31:35.138840 master-0 kubenswrapper[33141]: I0308 03:31:35.138801 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" Mar 08 03:31:35.138884 master-0 kubenswrapper[33141]: I0308 03:31:35.138859 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-etcd-client\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:35.138957 master-0 kubenswrapper[33141]: I0308 03:31:35.138889 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-qfbvt\" (UID: \"81abc17a-8a51-44e2-a5df-5ddb394a9fa6\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" Mar 08 03:31:35.139010 master-0 kubenswrapper[33141]: I0308 03:31:35.138958 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b537a655-ef73-40b5-b228-95ab6cfdedf2-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws" Mar 08 03:31:35.139111 master-0 kubenswrapper[33141]: I0308 03:31:35.139092 
33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-etcd-client\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:35.139201 master-0 kubenswrapper[33141]: I0308 03:31:35.139169 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd53c98b-51cc-498a-ab37-f743a27bdcfb-serving-cert\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:31:35.139289 master-0 kubenswrapper[33141]: I0308 03:31:35.139262 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-qfbvt\" (UID: \"81abc17a-8a51-44e2-a5df-5ddb394a9fa6\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" Mar 08 03:31:35.139340 master-0 kubenswrapper[33141]: I0308 03:31:35.139288 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/16ca7ace-9608-4686-a039-a6ba6e3ab837-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-wwmnn\" (UID: \"16ca7ace-9608-4686-a039-a6ba6e3ab837\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" Mar 08 03:31:35.139340 master-0 kubenswrapper[33141]: I0308 03:31:35.139325 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-proxy-tls\") pod \"machine-config-daemon-xv682\" (UID: 
\"7fafb070-7914-41c2-a8b2-e609a0e5bf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:31:35.139428 master-0 kubenswrapper[33141]: I0308 03:31:35.139356 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-proxy-ca-bundles\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:31:35.139597 master-0 kubenswrapper[33141]: I0308 03:31:35.139568 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-client-ca\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:31:35.139711 master-0 kubenswrapper[33141]: I0308 03:31:35.139693 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-client-ca\") pod \"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:31:35.139768 master-0 kubenswrapper[33141]: I0308 03:31:35.139753 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c65557b-9566-49f1-a049-fe492ca201b5-config\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:31:35.139820 master-0 kubenswrapper[33141]: I0308 03:31:35.139779 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-mcd-auth-proxy-config\") pod \"machine-config-daemon-xv682\" (UID: \"7fafb070-7914-41c2-a8b2-e609a0e5bf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:31:35.139820 master-0 kubenswrapper[33141]: I0308 03:31:35.139800 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-tls\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q" Mar 08 03:31:35.139939 master-0 kubenswrapper[33141]: I0308 03:31:35.139819 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" Mar 08 03:31:35.139939 master-0 kubenswrapper[33141]: I0308 03:31:35.139837 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-etcd-serving-ca\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:35.140106 master-0 kubenswrapper[33141]: I0308 03:31:35.140072 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c65557b-9566-49f1-a049-fe492ca201b5-config\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:31:35.140106 master-0 kubenswrapper[33141]: I0308 
03:31:35.140093 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-serving-cert\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:35.140235 master-0 kubenswrapper[33141]: I0308 03:31:35.140103 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-proxy-tls\") pod \"machine-config-daemon-xv682\" (UID: \"7fafb070-7914-41c2-a8b2-e609a0e5bf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:31:35.140235 master-0 kubenswrapper[33141]: I0308 03:31:35.140166 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/42b9f2d1-da5c-46b5-b131-d206fa37d436-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-27kjz\" (UID: \"42b9f2d1-da5c-46b5-b131-d206fa37d436\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz" Mar 08 03:31:35.140235 master-0 kubenswrapper[33141]: I0308 03:31:35.140218 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6176b631-3911-41cd-beb6-5bc2e924c3a7-cert\") pod \"ingress-canary-fhncs\" (UID: \"6176b631-3911-41cd-beb6-5bc2e924c3a7\") " pod="openshift-ingress-canary/ingress-canary-fhncs" Mar 08 03:31:35.140399 master-0 kubenswrapper[33141]: I0308 03:31:35.140275 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " 
pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" Mar 08 03:31:35.140399 master-0 kubenswrapper[33141]: I0308 03:31:35.140383 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-config\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:35.140503 master-0 kubenswrapper[33141]: I0308 03:31:35.140437 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ee8c53-bf36-4459-a2c2-380293a09e26-serving-cert\") pod \"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:31:35.140503 master-0 kubenswrapper[33141]: I0308 03:31:35.140476 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/16ca7ace-9608-4686-a039-a6ba6e3ab837-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-wwmnn\" (UID: \"16ca7ace-9608-4686-a039-a6ba6e3ab837\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" Mar 08 03:31:35.140503 master-0 kubenswrapper[33141]: I0308 03:31:35.140487 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/42b9f2d1-da5c-46b5-b131-d206fa37d436-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-27kjz\" (UID: \"42b9f2d1-da5c-46b5-b131-d206fa37d436\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz" Mar 08 03:31:35.140707 master-0 kubenswrapper[33141]: I0308 03:31:35.140550 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:31:35.140707 master-0 kubenswrapper[33141]: I0308 03:31:35.140647 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/99923acc-a1b4-4fbc-a636-f9c145856b01-certs\") pod \"machine-config-server-fstmq\" (UID: \"99923acc-a1b4-4fbc-a636-f9c145856b01\") " pod="openshift-machine-config-operator/machine-config-server-fstmq" Mar 08 03:31:35.140819 master-0 kubenswrapper[33141]: I0308 03:31:35.140705 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-audit\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:35.140819 master-0 kubenswrapper[33141]: I0308 03:31:35.140739 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-metrics-certs\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:31:35.140986 master-0 kubenswrapper[33141]: I0308 03:31:35.140943 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-mcd-auth-proxy-config\") pod \"machine-config-daemon-xv682\" (UID: \"7fafb070-7914-41c2-a8b2-e609a0e5bf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:31:35.141085 master-0 kubenswrapper[33141]: I0308 03:31:35.141001 33141 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-encryption-config\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:35.141085 master-0 kubenswrapper[33141]: I0308 03:31:35.141043 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovn-node-metrics-cert\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:35.141085 master-0 kubenswrapper[33141]: I0308 03:31:35.141013 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-metrics-certs\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:31:35.141257 master-0 kubenswrapper[33141]: I0308 03:31:35.141089 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-metrics-server-audit-profiles\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:31:35.141257 master-0 kubenswrapper[33141]: I0308 03:31:35.141152 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-default-certificate\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 
03:31:35.141257 master-0 kubenswrapper[33141]: I0308 03:31:35.141190 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-config\") pod \"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:31:35.141423 master-0 kubenswrapper[33141]: I0308 03:31:35.141245 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/99923acc-a1b4-4fbc-a636-f9c145856b01-certs\") pod \"machine-config-server-fstmq\" (UID: \"99923acc-a1b4-4fbc-a636-f9c145856b01\") " pod="openshift-machine-config-operator/machine-config-server-fstmq" Mar 08 03:31:35.141423 master-0 kubenswrapper[33141]: I0308 03:31:35.141266 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:31:35.141423 master-0 kubenswrapper[33141]: I0308 03:31:35.141367 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-ovn-node-metrics-cert\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:35.141583 master-0 kubenswrapper[33141]: I0308 03:31:35.141459 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-trusted-ca-bundle\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:35.141583 master-0 kubenswrapper[33141]: I0308 03:31:35.141497 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-default-certificate\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9" Mar 08 03:31:35.141583 master-0 kubenswrapper[33141]: I0308 03:31:35.141507 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:31:35.141730 master-0 kubenswrapper[33141]: I0308 03:31:35.141633 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/99923acc-a1b4-4fbc-a636-f9c145856b01-node-bootstrap-token\") pod \"machine-config-server-fstmq\" (UID: \"99923acc-a1b4-4fbc-a636-f9c145856b01\") " pod="openshift-machine-config-operator/machine-config-server-fstmq" Mar 08 03:31:35.141730 master-0 kubenswrapper[33141]: I0308 03:31:35.141689 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-server-tls\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:31:35.141982 master-0 
kubenswrapper[33141]: I0308 03:31:35.141747 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c65557b-9566-49f1-a049-fe492ca201b5-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: \"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7" Mar 08 03:31:35.141982 master-0 kubenswrapper[33141]: I0308 03:31:35.141820 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-qfbvt\" (UID: \"81abc17a-8a51-44e2-a5df-5ddb394a9fa6\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" Mar 08 03:31:35.141982 master-0 kubenswrapper[33141]: I0308 03:31:35.141869 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae8f3a1e-689b-4107-993a-dde67f4decf2-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" Mar 08 03:31:35.141982 master-0 kubenswrapper[33141]: I0308 03:31:35.141896 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b537a655-ef73-40b5-b228-95ab6cfdedf2-config\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws" Mar 08 03:31:35.141982 master-0 kubenswrapper[33141]: I0308 03:31:35.141952 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" Mar 08 03:31:35.142283 master-0 kubenswrapper[33141]: I0308 03:31:35.142104 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q" Mar 08 03:31:35.142283 master-0 kubenswrapper[33141]: I0308 03:31:35.142155 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/42b9f2d1-da5c-46b5-b131-d206fa37d436-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-27kjz\" (UID: \"42b9f2d1-da5c-46b5-b131-d206fa37d436\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz" Mar 08 03:31:35.142283 master-0 kubenswrapper[33141]: I0308 03:31:35.142214 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-trusted-ca-bundle\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:35.142448 master-0 kubenswrapper[33141]: I0308 03:31:35.142346 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/16ca7ace-9608-4686-a039-a6ba6e3ab837-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-wwmnn\" (UID: \"16ca7ace-9608-4686-a039-a6ba6e3ab837\") " 
pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" Mar 08 03:31:35.142448 master-0 kubenswrapper[33141]: I0308 03:31:35.142415 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-image-import-ca\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:35.143135 master-0 kubenswrapper[33141]: I0308 03:31:35.143078 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-qfbvt\" (UID: \"81abc17a-8a51-44e2-a5df-5ddb394a9fa6\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt" Mar 08 03:31:35.143135 master-0 kubenswrapper[33141]: I0308 03:31:35.143099 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/42b9f2d1-da5c-46b5-b131-d206fa37d436-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-27kjz\" (UID: \"42b9f2d1-da5c-46b5-b131-d206fa37d436\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz" Mar 08 03:31:35.144018 master-0 kubenswrapper[33141]: I0308 03:31:35.143984 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 08 03:31:35.150709 master-0 kubenswrapper[33141]: I0308 03:31:35.150657 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-serving-cert\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:35.160373 master-0 
kubenswrapper[33141]: I0308 03:31:35.160319 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 08 03:31:35.160712 master-0 kubenswrapper[33141]: I0308 03:31:35.160594 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6176b631-3911-41cd-beb6-5bc2e924c3a7-cert\") pod \"ingress-canary-fhncs\" (UID: \"6176b631-3911-41cd-beb6-5bc2e924c3a7\") " pod="openshift-ingress-canary/ingress-canary-fhncs" Mar 08 03:31:35.180498 master-0 kubenswrapper[33141]: I0308 03:31:35.180424 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 08 03:31:35.181643 master-0 kubenswrapper[33141]: I0308 03:31:35.181586 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f2057f75-159d-4416-a234-050f0fe1afc9-encryption-config\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:35.201530 master-0 kubenswrapper[33141]: I0308 03:31:35.201407 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 08 03:31:35.211280 master-0 kubenswrapper[33141]: I0308 03:31:35.211239 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-audit\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:35.221051 master-0 kubenswrapper[33141]: I0308 03:31:35.220990 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 08 03:31:35.231701 master-0 kubenswrapper[33141]: I0308 03:31:35.231661 33141 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-config\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:35.240314 master-0 kubenswrapper[33141]: I0308 03:31:35.240280 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 08 03:31:35.251133 master-0 kubenswrapper[33141]: I0308 03:31:35.251087 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-etcd-serving-ca\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:35.261337 master-0 kubenswrapper[33141]: I0308 03:31:35.261304 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 08 03:31:35.280793 master-0 kubenswrapper[33141]: I0308 03:31:35.280761 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 08 03:31:35.283011 master-0 kubenswrapper[33141]: I0308 03:31:35.282982 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f2057f75-159d-4416-a234-050f0fe1afc9-image-import-ca\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:35.300432 master-0 kubenswrapper[33141]: I0308 03:31:35.300378 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 08 03:31:35.321399 master-0 kubenswrapper[33141]: I0308 03:31:35.321336 33141 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 08 03:31:35.342462 master-0 kubenswrapper[33141]: I0308 03:31:35.342415 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 08 03:31:35.353212 master-0 kubenswrapper[33141]: I0308 03:31:35.353164 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/8985dac1-38cf-41d1-b7cd-c2bfaf0f6ebc-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-dfmh2\" (UID: \"8985dac1-38cf-41d1-b7cd-c2bfaf0f6ebc\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmh2" Mar 08 03:31:35.380867 master-0 kubenswrapper[33141]: I0308 03:31:35.380785 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 08 03:31:35.390165 master-0 kubenswrapper[33141]: I0308 03:31:35.390113 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:31:35.399058 master-0 kubenswrapper[33141]: I0308 03:31:35.399020 33141 request.go:700] Waited for 2.024011015s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dcloud-controller-manager-images&limit=500&resourceVersion=0 Mar 08 03:31:35.400797 master-0 kubenswrapper[33141]: I0308 03:31:35.400751 33141 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 08 03:31:35.409462 master-0 kubenswrapper[33141]: I0308 03:31:35.409363 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:31:35.421768 master-0 kubenswrapper[33141]: I0308 03:31:35.421721 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-qnnnr" Mar 08 03:31:35.441482 master-0 kubenswrapper[33141]: I0308 03:31:35.441422 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 08 03:31:35.441902 master-0 kubenswrapper[33141]: I0308 03:31:35.441833 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:31:35.461690 master-0 kubenswrapper[33141]: I0308 03:31:35.461592 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 08 03:31:35.481003 master-0 kubenswrapper[33141]: I0308 03:31:35.480946 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 
08 03:31:35.500866 master-0 kubenswrapper[33141]: I0308 03:31:35.500826 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 08 03:31:35.504391 master-0 kubenswrapper[33141]: I0308 03:31:35.504342 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/99923acc-a1b4-4fbc-a636-f9c145856b01-node-bootstrap-token\") pod \"machine-config-server-fstmq\" (UID: \"99923acc-a1b4-4fbc-a636-f9c145856b01\") " pod="openshift-machine-config-operator/machine-config-server-fstmq" Mar 08 03:31:35.520271 master-0 kubenswrapper[33141]: I0308 03:31:35.520226 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 08 03:31:35.523520 master-0 kubenswrapper[33141]: I0308 03:31:35.523464 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b537a655-ef73-40b5-b228-95ab6cfdedf2-config\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws" Mar 08 03:31:35.541200 master-0 kubenswrapper[33141]: I0308 03:31:35.541151 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-vbs7r" Mar 08 03:31:35.560725 master-0 kubenswrapper[33141]: I0308 03:31:35.560677 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 08 03:31:35.568311 master-0 kubenswrapper[33141]: I0308 03:31:35.568279 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b537a655-ef73-40b5-b228-95ab6cfdedf2-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: 
\"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws" Mar 08 03:31:35.580590 master-0 kubenswrapper[33141]: I0308 03:31:35.580540 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 08 03:31:35.600630 master-0 kubenswrapper[33141]: I0308 03:31:35.600574 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 08 03:31:35.620417 master-0 kubenswrapper[33141]: I0308 03:31:35.620361 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 08 03:31:35.632431 master-0 kubenswrapper[33141]: I0308 03:31:35.632368 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b537a655-ef73-40b5-b228-95ab6cfdedf2-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws" Mar 08 03:31:35.641159 master-0 kubenswrapper[33141]: I0308 03:31:35.640995 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-pz6cl" Mar 08 03:31:35.661118 master-0 kubenswrapper[33141]: I0308 03:31:35.661063 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-h4sjt" Mar 08 03:31:35.681363 master-0 kubenswrapper[33141]: I0308 03:31:35.681298 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 08 03:31:35.689158 master-0 kubenswrapper[33141]: I0308 03:31:35.689098 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/daf9e0ac-b5a3-4a3e-aa57-31b810f634ef-webhook-certs\") pod \"multus-admission-controller-7769569c45-lxr7s\" (UID: \"daf9e0ac-b5a3-4a3e-aa57-31b810f634ef\") " pod="openshift-multus/multus-admission-controller-7769569c45-lxr7s" Mar 08 03:31:35.710714 master-0 kubenswrapper[33141]: I0308 03:31:35.710678 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 08 03:31:35.720286 master-0 kubenswrapper[33141]: I0308 03:31:35.720206 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-proxy-ca-bundles\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:31:35.720918 master-0 kubenswrapper[33141]: I0308 03:31:35.720877 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 08 03:31:35.730802 master-0 kubenswrapper[33141]: I0308 03:31:35.730750 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-client-ca\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:31:35.741548 master-0 kubenswrapper[33141]: I0308 03:31:35.741509 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 08 03:31:35.761577 master-0 kubenswrapper[33141]: I0308 03:31:35.761545 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 08 03:31:35.769587 master-0 kubenswrapper[33141]: I0308 03:31:35.769544 33141 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-config\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:31:35.782127 master-0 kubenswrapper[33141]: I0308 03:31:35.782068 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 08 03:31:35.790789 master-0 kubenswrapper[33141]: I0308 03:31:35.790732 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" Mar 08 03:31:35.800753 master-0 kubenswrapper[33141]: I0308 03:31:35.800711 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 08 03:31:35.821966 master-0 kubenswrapper[33141]: I0308 03:31:35.821886 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-rsc8q" Mar 08 03:31:35.842255 master-0 kubenswrapper[33141]: I0308 03:31:35.842204 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 08 03:31:35.849640 master-0 kubenswrapper[33141]: I0308 03:31:35.849595 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd53c98b-51cc-498a-ab37-f743a27bdcfb-serving-cert\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:31:35.860298 master-0 kubenswrapper[33141]: I0308 
03:31:35.860255 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 08 03:31:35.869957 master-0 kubenswrapper[33141]: I0308 03:31:35.869897 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" Mar 08 03:31:35.881438 master-0 kubenswrapper[33141]: I0308 03:31:35.881391 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 08 03:31:35.889463 master-0 kubenswrapper[33141]: I0308 03:31:35.889407 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" Mar 08 03:31:35.900438 master-0 kubenswrapper[33141]: I0308 03:31:35.900405 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 08 03:31:35.901676 master-0 kubenswrapper[33141]: I0308 03:31:35.901634 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/16ca7ace-9608-4686-a039-a6ba6e3ab837-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-wwmnn\" (UID: \"16ca7ace-9608-4686-a039-a6ba6e3ab837\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" Mar 08 03:31:35.903012 master-0 
kubenswrapper[33141]: I0308 03:31:35.902968 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae8f3a1e-689b-4107-993a-dde67f4decf2-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" Mar 08 03:31:35.908170 master-0 kubenswrapper[33141]: I0308 03:31:35.908136 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/beed862c-6283-4568-aa2e-f49b31e30a3b-metrics-client-ca\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q" Mar 08 03:31:35.910550 master-0 kubenswrapper[33141]: I0308 03:31:35.910506 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" Mar 08 03:31:35.920237 master-0 kubenswrapper[33141]: I0308 03:31:35.920190 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-46c6c" Mar 08 03:31:35.940977 master-0 kubenswrapper[33141]: I0308 03:31:35.940896 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-7gb49" Mar 08 03:31:35.960022 master-0 kubenswrapper[33141]: I0308 03:31:35.959988 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 08 03:31:35.970493 master-0 kubenswrapper[33141]: I0308 03:31:35.970391 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" 
(UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-client-certs\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:31:35.980663 master-0 kubenswrapper[33141]: I0308 03:31:35.980598 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 08 03:31:35.990195 master-0 kubenswrapper[33141]: I0308 03:31:35.990149 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/16ca7ace-9608-4686-a039-a6ba6e3ab837-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-wwmnn\" (UID: \"16ca7ace-9608-4686-a039-a6ba6e3ab837\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" Mar 08 03:31:36.001114 master-0 kubenswrapper[33141]: I0308 03:31:36.001049 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 08 03:31:36.003772 master-0 kubenswrapper[33141]: I0308 03:31:36.003727 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/16ca7ace-9608-4686-a039-a6ba6e3ab837-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-wwmnn\" (UID: \"16ca7ace-9608-4686-a039-a6ba6e3ab837\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" Mar 08 03:31:36.020432 master-0 kubenswrapper[33141]: I0308 03:31:36.020361 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 08 03:31:36.023035 master-0 kubenswrapper[33141]: I0308 03:31:36.022977 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q" Mar 08 03:31:36.041285 master-0 kubenswrapper[33141]: I0308 03:31:36.041234 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 08 03:31:36.050618 master-0 kubenswrapper[33141]: I0308 03:31:36.050504 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/beed862c-6283-4568-aa2e-f49b31e30a3b-node-exporter-tls\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q" Mar 08 03:31:36.060880 master-0 kubenswrapper[33141]: I0308 03:31:36.060795 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-da0kci31im4hq" Mar 08 03:31:36.069496 master-0 kubenswrapper[33141]: I0308 03:31:36.069424 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-client-ca-bundle\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:31:36.081761 master-0 kubenswrapper[33141]: I0308 03:31:36.081664 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 08 03:31:36.093353 master-0 kubenswrapper[33141]: I0308 03:31:36.093251 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-metrics-server-audit-profiles\") pod 
\"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:31:36.100694 master-0 kubenswrapper[33141]: I0308 03:31:36.100625 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 08 03:31:36.103263 master-0 kubenswrapper[33141]: I0308 03:31:36.103192 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" Mar 08 03:31:36.121054 master-0 kubenswrapper[33141]: I0308 03:31:36.120972 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 08 03:31:36.129371 master-0 kubenswrapper[33141]: I0308 03:31:36.129326 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/ae8f3a1e-689b-4107-993a-dde67f4decf2-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx" Mar 08 03:31:36.140435 master-0 kubenswrapper[33141]: E0308 03:31:36.140306 33141 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 08 03:31:36.140540 master-0 kubenswrapper[33141]: E0308 03:31:36.140440 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-client-ca podName:a0ee8c53-bf36-4459-a2c2-380293a09e26 nodeName:}" failed. 
No retries permitted until 2026-03-08 03:31:37.140408839 +0000 UTC m=+11.010302062 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-client-ca") pod "route-controller-manager-694774cfc9-r5gkh" (UID: "a0ee8c53-bf36-4459-a2c2-380293a09e26") : failed to sync configmap cache: timed out waiting for the condition Mar 08 03:31:36.140924 master-0 kubenswrapper[33141]: E0308 03:31:36.140887 33141 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:36.141041 master-0 kubenswrapper[33141]: E0308 03:31:36.141030 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0ee8c53-bf36-4459-a2c2-380293a09e26-serving-cert podName:a0ee8c53-bf36-4459-a2c2-380293a09e26 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:37.141011444 +0000 UTC m=+11.010904637 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a0ee8c53-bf36-4459-a2c2-380293a09e26-serving-cert") pod "route-controller-manager-694774cfc9-r5gkh" (UID: "a0ee8c53-bf36-4459-a2c2-380293a09e26") : failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:36.141153 master-0 kubenswrapper[33141]: E0308 03:31:36.141139 33141 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 08 03:31:36.141285 master-0 kubenswrapper[33141]: E0308 03:31:36.141270 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-configmap-kubelet-serving-ca-bundle podName:1e82d678-b5bb-4aec-9b5d-435305e8bdc2 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:37.14126056 +0000 UTC m=+11.011153753 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-configmap-kubelet-serving-ca-bundle") pod "metrics-server-6977dfbb45-dwjx9" (UID: "1e82d678-b5bb-4aec-9b5d-435305e8bdc2") : failed to sync configmap cache: timed out waiting for the condition Mar 08 03:31:36.141366 master-0 kubenswrapper[33141]: I0308 03:31:36.141152 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 08 03:31:36.141530 master-0 kubenswrapper[33141]: E0308 03:31:36.141491 33141 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Mar 08 03:31:36.141586 master-0 kubenswrapper[33141]: E0308 03:31:36.141574 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-config podName:a0ee8c53-bf36-4459-a2c2-380293a09e26 nodeName:}" failed. No retries permitted until 2026-03-08 03:31:37.141553287 +0000 UTC m=+11.011446520 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-config") pod "route-controller-manager-694774cfc9-r5gkh" (UID: "a0ee8c53-bf36-4459-a2c2-380293a09e26") : failed to sync configmap cache: timed out waiting for the condition Mar 08 03:31:36.142364 master-0 kubenswrapper[33141]: E0308 03:31:36.142293 33141 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:36.142480 master-0 kubenswrapper[33141]: E0308 03:31:36.142451 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-server-tls podName:1e82d678-b5bb-4aec-9b5d-435305e8bdc2 nodeName:}" failed. 
No retries permitted until 2026-03-08 03:31:37.142394548 +0000 UTC m=+11.012287781 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-server-tls") pod "metrics-server-6977dfbb45-dwjx9" (UID: "1e82d678-b5bb-4aec-9b5d-435305e8bdc2") : failed to sync secret cache: timed out waiting for the condition Mar 08 03:31:36.163049 master-0 kubenswrapper[33141]: I0308 03:31:36.162860 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-d4zhc" Mar 08 03:31:36.181454 master-0 kubenswrapper[33141]: I0308 03:31:36.181389 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 08 03:31:36.201696 master-0 kubenswrapper[33141]: I0308 03:31:36.201638 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-fvhvd" Mar 08 03:31:36.221775 master-0 kubenswrapper[33141]: I0308 03:31:36.221655 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 08 03:31:36.240272 master-0 kubenswrapper[33141]: I0308 03:31:36.240225 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 08 03:31:36.260438 master-0 kubenswrapper[33141]: I0308 03:31:36.260391 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 08 03:31:36.281219 master-0 kubenswrapper[33141]: I0308 03:31:36.281180 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 08 03:31:36.301438 master-0 kubenswrapper[33141]: I0308 03:31:36.301389 33141 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-operator-dockercfg-278m6" Mar 08 03:31:36.322001 master-0 kubenswrapper[33141]: I0308 03:31:36.321968 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 08 03:31:36.354095 master-0 kubenswrapper[33141]: I0308 03:31:36.354024 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhc2q\" (UniqueName: \"kubernetes.io/projected/c474b370-c291-4662-b57c-a20f77931c1b-kube-api-access-xhc2q\") pod \"network-check-source-7c67b67d47-6bd2j\" (UID: \"c474b370-c291-4662-b57c-a20f77931c1b\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-6bd2j" Mar 08 03:31:36.377457 master-0 kubenswrapper[33141]: I0308 03:31:36.377388 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4vq9\" (UniqueName: \"kubernetes.io/projected/aadf7b67-db33-4392-81f5-1b93eef54545-kube-api-access-n4vq9\") pod \"iptables-alerter-fpxrc\" (UID: \"aadf7b67-db33-4392-81f5-1b93eef54545\") " pod="openshift-network-operator/iptables-alerter-fpxrc" Mar 08 03:31:36.403481 master-0 kubenswrapper[33141]: I0308 03:31:36.403415 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxjkw\" (UniqueName: \"kubernetes.io/projected/32a3f04f-05ea-4ee3-ac77-da375c39d104-kube-api-access-fxjkw\") pod \"redhat-marketplace-k6hg9\" (UID: \"32a3f04f-05ea-4ee3-ac77-da375c39d104\") " pod="openshift-marketplace/redhat-marketplace-k6hg9" Mar 08 03:31:36.412504 master-0 kubenswrapper[33141]: I0308 03:31:36.412443 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d82cf0db-0891-482d-856b-1675843042dd-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" Mar 
08 03:31:36.419830 master-0 kubenswrapper[33141]: I0308 03:31:36.419789 33141 request.go:700] Waited for 3.001558957s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token Mar 08 03:31:36.444293 master-0 kubenswrapper[33141]: I0308 03:31:36.444233 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4kt5\" (UniqueName: \"kubernetes.io/projected/d82cf0db-0891-482d-856b-1675843042dd-kube-api-access-g4kt5\") pod \"cluster-image-registry-operator-86d6d77c7c-brfnq\" (UID: \"d82cf0db-0891-482d-856b-1675843042dd\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-brfnq" Mar 08 03:31:36.464736 master-0 kubenswrapper[33141]: I0308 03:31:36.463756 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knc57\" (UniqueName: \"kubernetes.io/projected/45212ce7-5f95-402e-93c4-83bac844f77d-kube-api-access-knc57\") pod \"cluster-baremetal-operator-5cdb4c5598-qgg4b\" (UID: \"45212ce7-5f95-402e-93c4-83bac844f77d\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-qgg4b" Mar 08 03:31:36.481210 master-0 kubenswrapper[33141]: I0308 03:31:36.481089 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mbg2\" (UniqueName: \"kubernetes.io/projected/c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6-kube-api-access-2mbg2\") pod \"control-plane-machine-set-operator-6686554ddc-zljww\" (UID: \"c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-zljww" Mar 08 03:31:36.496328 master-0 kubenswrapper[33141]: I0308 03:31:36.496267 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xrfv\" (UniqueName: \"kubernetes.io/projected/89fc77c9-b444-4828-8a35-c63ea9335245-kube-api-access-6xrfv\") pod 
\"network-operator-7c649bf6d4-wxrfp\" (UID: \"89fc77c9-b444-4828-8a35-c63ea9335245\") " pod="openshift-network-operator/network-operator-7c649bf6d4-wxrfp" Mar 08 03:31:36.517186 master-0 kubenswrapper[33141]: I0308 03:31:36.517132 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k2lp\" (UniqueName: \"kubernetes.io/projected/1fa64f1b-9f10-488b-8f94-1600774062c4-kube-api-access-8k2lp\") pod \"service-ca-operator-69b6fc6b88-vjmf6\" (UID: \"1fa64f1b-9f10-488b-8f94-1600774062c4\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-vjmf6" Mar 08 03:31:36.538342 master-0 kubenswrapper[33141]: I0308 03:31:36.538276 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89prb\" (UniqueName: \"kubernetes.io/projected/c6e4afd0-fbcd-49c7-9132-b54c9c28b74b-kube-api-access-89prb\") pod \"etcd-operator-5884b9cd56-dn4ll\" (UID: \"c6e4afd0-fbcd-49c7-9132-b54c9c28b74b\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-dn4ll" Mar 08 03:31:36.560879 master-0 kubenswrapper[33141]: I0308 03:31:36.560816 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5qkq\" (UniqueName: \"kubernetes.io/projected/efd90b06-2733-4086-8d70-b9aed3f7c5fa-kube-api-access-w5qkq\") pod \"certified-operators-r97mb\" (UID: \"efd90b06-2733-4086-8d70-b9aed3f7c5fa\") " pod="openshift-marketplace/certified-operators-r97mb" Mar 08 03:31:36.578343 master-0 kubenswrapper[33141]: I0308 03:31:36.578285 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnvtg\" (UniqueName: \"kubernetes.io/projected/0722d9c3-77b8-4770-9171-d4aeba4b0cc7-kube-api-access-vnvtg\") pod \"openshift-controller-manager-operator-8565d84698-h7lpf\" (UID: \"0722d9c3-77b8-4770-9171-d4aeba4b0cc7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-h7lpf" Mar 08 03:31:36.604163 master-0 kubenswrapper[33141]: I0308 
03:31:36.604104 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7flfl\" (UniqueName: \"kubernetes.io/projected/2a506cf6-bc39-4089-9caa-4c14c4d15c11-kube-api-access-7flfl\") pod \"openshift-apiserver-operator-799b6db4d7-gstfr\" (UID: \"2a506cf6-bc39-4089-9caa-4c14c4d15c11\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-gstfr" Mar 08 03:31:36.627588 master-0 kubenswrapper[33141]: I0308 03:31:36.627532 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2tk7\" (UniqueName: \"kubernetes.io/projected/d5eee869-c27f-4534-bbce-d954c42b36a3-kube-api-access-l2tk7\") pod \"multus-additional-cni-plugins-c8gc6\" (UID: \"d5eee869-c27f-4534-bbce-d954c42b36a3\") " pod="openshift-multus/multus-additional-cni-plugins-c8gc6" Mar 08 03:31:36.652738 master-0 kubenswrapper[33141]: I0308 03:31:36.652691 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdzj9\" (UniqueName: \"kubernetes.io/projected/7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6-kube-api-access-bdzj9\") pod \"marketplace-operator-64bf9778cb-4pgcf\" (UID: \"7b0f0192-f2ab-4d6c-bf74-2b149bdaefe6\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf" Mar 08 03:31:36.664220 master-0 kubenswrapper[33141]: I0308 03:31:36.664174 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttwx8\" (UniqueName: \"kubernetes.io/projected/82ee54a2-5967-4da7-940c-5200d7df098d-kube-api-access-ttwx8\") pod \"redhat-operators-4h9n9\" (UID: \"82ee54a2-5967-4da7-940c-5200d7df098d\") " pod="openshift-marketplace/redhat-operators-4h9n9" Mar 08 03:31:36.682015 master-0 kubenswrapper[33141]: I0308 03:31:36.678579 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c72dm\" (UniqueName: \"kubernetes.io/projected/7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b-kube-api-access-c72dm\") pod 
\"catalogd-controller-manager-7f8b8b6f4c-rjwdp\" (UID: \"7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp" Mar 08 03:31:36.703718 master-0 kubenswrapper[33141]: I0308 03:31:36.699358 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzgg5\" (UniqueName: \"kubernetes.io/projected/bfc9ae4f-eb67-4ed1-97a1-d67e839fd601-kube-api-access-nzgg5\") pod \"kube-state-metrics-68b88f8cb5-vxn59\" (UID: \"bfc9ae4f-eb67-4ed1-97a1-d67e839fd601\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-vxn59" Mar 08 03:31:36.717744 master-0 kubenswrapper[33141]: I0308 03:31:36.714895 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqrn6\" (UniqueName: \"kubernetes.io/projected/e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff-kube-api-access-qqrn6\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc\" (UID: \"e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc" Mar 08 03:31:36.737380 master-0 kubenswrapper[33141]: I0308 03:31:36.737273 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tj8l\" (UniqueName: \"kubernetes.io/projected/3c336192-80ee-4d53-a4ec-710cba95fac6-kube-api-access-6tj8l\") pod \"migrator-57ccdf9b5-rrfg6\" (UID: \"3c336192-80ee-4d53-a4ec-710cba95fac6\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-rrfg6" Mar 08 03:31:36.753463 master-0 kubenswrapper[33141]: I0308 03:31:36.753414 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8cgc\" (UniqueName: \"kubernetes.io/projected/16ca7ace-9608-4686-a039-a6ba6e3ab837-kube-api-access-w8cgc\") pod \"openshift-state-metrics-74cc79fd76-wwmnn\" (UID: \"16ca7ace-9608-4686-a039-a6ba6e3ab837\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-wwmnn" Mar 08 
03:31:36.771712 master-0 kubenswrapper[33141]: I0308 03:31:36.771655 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ct9j\" (UniqueName: \"kubernetes.io/projected/4fd323ae-11bf-4207-bdce-4d51a9c19dc3-kube-api-access-2ct9j\") pod \"network-node-identity-ppdzb\" (UID: \"4fd323ae-11bf-4207-bdce-4d51a9c19dc3\") " pod="openshift-network-node-identity/network-node-identity-ppdzb" Mar 08 03:31:36.791360 master-0 kubenswrapper[33141]: I0308 03:31:36.791323 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppbl6\" (UniqueName: \"kubernetes.io/projected/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-kube-api-access-ppbl6\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:31:36.811401 master-0 kubenswrapper[33141]: I0308 03:31:36.811296 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2a53f3b-7e22-47eb-9f28-da3441b3662f-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-mhw86\" (UID: \"d2a53f3b-7e22-47eb-9f28-da3441b3662f\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-mhw86" Mar 08 03:31:36.830851 master-0 kubenswrapper[33141]: I0308 03:31:36.830742 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rtt8\" (UniqueName: \"kubernetes.io/projected/7fafb070-7914-41c2-a8b2-e609a0e5bf9f-kube-api-access-4rtt8\") pod \"machine-config-daemon-xv682\" (UID: \"7fafb070-7914-41c2-a8b2-e609a0e5bf9f\") " pod="openshift-machine-config-operator/machine-config-daemon-xv682" Mar 08 03:31:36.852480 master-0 kubenswrapper[33141]: I0308 03:31:36.852430 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfdpq\" (UniqueName: \"kubernetes.io/projected/99923acc-a1b4-4fbc-a636-f9c145856b01-kube-api-access-tfdpq\") pod 
\"machine-config-server-fstmq\" (UID: \"99923acc-a1b4-4fbc-a636-f9c145856b01\") " pod="openshift-machine-config-operator/machine-config-server-fstmq" Mar 08 03:31:36.872377 master-0 kubenswrapper[33141]: I0308 03:31:36.872325 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8krg\" (UniqueName: \"kubernetes.io/projected/a0ee8c53-bf36-4459-a2c2-380293a09e26-kube-api-access-c8krg\") pod \"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:31:36.892229 master-0 kubenswrapper[33141]: I0308 03:31:36.892125 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz7l8\" (UniqueName: \"kubernetes.io/projected/bd53c98b-51cc-498a-ab37-f743a27bdcfb-kube-api-access-hz7l8\") pod \"controller-manager-75cd54f7f-2bg6l\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:31:36.922413 master-0 kubenswrapper[33141]: I0308 03:31:36.922336 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kxn4\" (UniqueName: \"kubernetes.io/projected/ed56c17f-7e15-4776-80a6-3ef091307e89-kube-api-access-4kxn4\") pod \"cluster-monitoring-operator-674cbfbd9d-hzlxx\" (UID: \"ed56c17f-7e15-4776-80a6-3ef091307e89\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-hzlxx" Mar 08 03:31:36.932280 master-0 kubenswrapper[33141]: I0308 03:31:36.932216 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2ng6\" (UniqueName: \"kubernetes.io/projected/0e59f2e1-7fbc-43b1-bc81-7ca5f058d774-kube-api-access-w2ng6\") pod \"network-check-target-4lx8s\" (UID: \"0e59f2e1-7fbc-43b1-bc81-7ca5f058d774\") " pod="openshift-network-diagnostics/network-check-target-4lx8s" Mar 08 03:31:36.951343 master-0 kubenswrapper[33141]: I0308 
03:31:36.951282 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrq96\" (UniqueName: \"kubernetes.io/projected/f520fbf8-9403-46bc-9381-226a3a1ed1c7-kube-api-access-hrq96\") pod \"node-resolver-mps4n\" (UID: \"f520fbf8-9403-46bc-9381-226a3a1ed1c7\") " pod="openshift-dns/node-resolver-mps4n" Mar 08 03:31:36.971115 master-0 kubenswrapper[33141]: I0308 03:31:36.971059 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22zrr\" (UniqueName: \"kubernetes.io/projected/beed862c-6283-4568-aa2e-f49b31e30a3b-kube-api-access-22zrr\") pod \"node-exporter-sjs7q\" (UID: \"beed862c-6283-4568-aa2e-f49b31e30a3b\") " pod="openshift-monitoring/node-exporter-sjs7q" Mar 08 03:31:36.991799 master-0 kubenswrapper[33141]: I0308 03:31:36.991679 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr9bw\" (UniqueName: \"kubernetes.io/projected/399c5025-da66-4c52-8e68-ea6c996d9cc8-kube-api-access-vr9bw\") pod \"operator-controller-controller-manager-6598bfb6c4-c74s2\" (UID: \"399c5025-da66-4c52-8e68-ea6c996d9cc8\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2" Mar 08 03:31:37.020808 master-0 kubenswrapper[33141]: I0308 03:31:37.020743 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q425\" (UniqueName: \"kubernetes.io/projected/631b3a8e-43e0-4818-b6e1-bd61ac531ab6-kube-api-access-6q425\") pod \"ovnkube-control-plane-66b55d57d-gvgch\" (UID: \"631b3a8e-43e0-4818-b6e1-bd61ac531ab6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-gvgch" Mar 08 03:31:37.035480 master-0 kubenswrapper[33141]: I0308 03:31:37.035403 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fw25\" (UniqueName: \"kubernetes.io/projected/8c65557b-9566-49f1-a049-fe492ca201b5-kube-api-access-5fw25\") pod \"machine-api-operator-84bf6db4f9-5l4t7\" (UID: 
\"8c65557b-9566-49f1-a049-fe492ca201b5\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-5l4t7"
Mar 08 03:31:37.053051 master-0 kubenswrapper[33141]: I0308 03:31:37.052999 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a058138-8039-4841-821b-7ee5bb8648e4-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-zcr8w\" (UID: \"5a058138-8039-4841-821b-7ee5bb8648e4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-zcr8w"
Mar 08 03:31:37.082617 master-0 kubenswrapper[33141]: I0308 03:31:37.082563 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q68p\" (UniqueName: \"kubernetes.io/projected/f8711b9f-3d18-4b8d-a263-2c9af9dc68a6-kube-api-access-7q68p\") pod \"package-server-manager-854648ff6d-8qznw\" (UID: \"f8711b9f-3d18-4b8d-a263-2c9af9dc68a6\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw"
Mar 08 03:31:37.091720 master-0 kubenswrapper[33141]: I0308 03:31:37.091671 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4gf5\" (UniqueName: \"kubernetes.io/projected/3a2a141d-a4c3-4b6c-a90b-d184f61a14dd-kube-api-access-h4gf5\") pod \"apiserver-7b545788fb-82rjl\" (UID: \"3a2a141d-a4c3-4b6c-a90b-d184f61a14dd\") " pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl"
Mar 08 03:31:37.115678 master-0 kubenswrapper[33141]: I0308 03:31:37.115626 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgvcz\" (UniqueName: \"kubernetes.io/projected/5a92a557-d023-4531-b3a3-e559af0fe358-kube-api-access-vgvcz\") pod \"catalog-operator-7d9c49f57b-wsswx\" (UID: \"5a92a557-d023-4531-b3a3-e559af0fe358\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx"
Mar 08 03:31:37.136856 master-0 kubenswrapper[33141]: I0308 03:31:37.136799 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmdmd\" (UniqueName: \"kubernetes.io/projected/2728b91e-d59a-4e85-b245-0f297e9377f9-kube-api-access-zmdmd\") pod \"insights-operator-8f89dfddd-9l8dc\" (UID: \"2728b91e-d59a-4e85-b245-0f297e9377f9\") " pod="openshift-insights/insights-operator-8f89dfddd-9l8dc"
Mar 08 03:31:37.164642 master-0 kubenswrapper[33141]: I0308 03:31:37.164586 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8l6s\" (UniqueName: \"kubernetes.io/projected/9b090750-b893-42fe-8def-dfb3f4253d43-kube-api-access-p8l6s\") pod \"dns-default-p6kjc\" (UID: \"9b090750-b893-42fe-8def-dfb3f4253d43\") " pod="openshift-dns/dns-default-p6kjc"
Mar 08 03:31:37.177278 master-0 kubenswrapper[33141]: I0308 03:31:37.177220 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2468d2a3-ec65-4888-a86a-3f66fa311f56-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-xtwpr\" (UID: \"2468d2a3-ec65-4888-a86a-3f66fa311f56\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-xtwpr"
Mar 08 03:31:37.186407 master-0 kubenswrapper[33141]: I0308 03:31:37.186322 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-client-ca\") pod \"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh"
Mar 08 03:31:37.186658 master-0 kubenswrapper[33141]: I0308 03:31:37.186620 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9"
Mar 08 03:31:37.186695 master-0 kubenswrapper[33141]: I0308 03:31:37.186669 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ee8c53-bf36-4459-a2c2-380293a09e26-serving-cert\") pod \"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh"
Mar 08 03:31:37.186825 master-0 kubenswrapper[33141]: I0308 03:31:37.186791 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-config\") pod \"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh"
Mar 08 03:31:37.186982 master-0 kubenswrapper[33141]: I0308 03:31:37.186960 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-server-tls\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9"
Mar 08 03:31:37.187586 master-0 kubenswrapper[33141]: I0308 03:31:37.187548 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-server-tls\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9"
Mar 08 03:31:37.187824 master-0 kubenswrapper[33141]: I0308 03:31:37.187787 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ee8c53-bf36-4459-a2c2-380293a09e26-serving-cert\") pod \"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh"
Mar 08 03:31:37.187872 master-0 kubenswrapper[33141]: I0308 03:31:37.187833 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-client-ca\") pod \"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh"
Mar 08 03:31:37.187960 master-0 kubenswrapper[33141]: I0308 03:31:37.187847 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6977dfbb45-dwjx9\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9"
Mar 08 03:31:37.188010 master-0 kubenswrapper[33141]: I0308 03:31:37.187995 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-config\") pod \"route-controller-manager-694774cfc9-r5gkh\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh"
Mar 08 03:31:37.197587 master-0 kubenswrapper[33141]: E0308 03:31:37.197558 33141 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 08 03:31:37.197587 master-0 kubenswrapper[33141]: E0308 03:31:37.197587 33141 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-retry-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 08 03:31:37.197679 master-0 kubenswrapper[33141]: E0308 03:31:37.197646 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e6716923-7f46-438f-9cc4-c0f071ca5b1a-kube-api-access podName:e6716923-7f46-438f-9cc4-c0f071ca5b1a nodeName:}" failed. No retries permitted until 2026-03-08 03:31:37.697628638 +0000 UTC m=+11.567521871 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/e6716923-7f46-438f-9cc4-c0f071ca5b1a-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "e6716923-7f46-438f-9cc4-c0f071ca5b1a") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 08 03:31:37.224292 master-0 kubenswrapper[33141]: I0308 03:31:37.224241 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkckt\" (UniqueName: \"kubernetes.io/projected/42b9f2d1-da5c-46b5-b131-d206fa37d436-kube-api-access-bkckt\") pod \"machine-config-controller-ff46b7bdf-27kjz\" (UID: \"42b9f2d1-da5c-46b5-b131-d206fa37d436\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-27kjz"
Mar 08 03:31:37.235989 master-0 kubenswrapper[33141]: I0308 03:31:37.235942 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms6s7\" (UniqueName: \"kubernetes.io/projected/4711e21f-da6d-47ee-8722-64663e05de10-kube-api-access-ms6s7\") pod \"cluster-olm-operator-77899cf6d-7vlmt\" (UID: \"4711e21f-da6d-47ee-8722-64663e05de10\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-7vlmt"
Mar 08 03:31:37.256140 master-0 kubenswrapper[33141]: I0308 03:31:37.256030 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f42fg\" (UniqueName: \"kubernetes.io/projected/2ffe00fd-6834-4a5b-8b0b-b467d284f23c-kube-api-access-f42fg\") pod \"cluster-autoscaler-operator-69576476f7-jd7rl\" (UID: \"2ffe00fd-6834-4a5b-8b0b-b467d284f23c\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-jd7rl"
Mar 08 03:31:37.272607 master-0 kubenswrapper[33141]: I0308 03:31:37.272560 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttqvt\" (UniqueName: \"kubernetes.io/projected/90ef7c0a-7c6f-45aa-865d-1e247110b265-kube-api-access-ttqvt\") pod \"authentication-operator-7c6989d6c4-k8xgg\" (UID: \"90ef7c0a-7c6f-45aa-865d-1e247110b265\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-k8xgg"
Mar 08 03:31:37.288381 master-0 kubenswrapper[33141]: I0308 03:31:37.288295 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6716923-7f46-438f-9cc4-c0f071ca5b1a-kube-api-access\") pod \"e6716923-7f46-438f-9cc4-c0f071ca5b1a\" (UID: \"e6716923-7f46-438f-9cc4-c0f071ca5b1a\") "
Mar 08 03:31:37.290759 master-0 kubenswrapper[33141]: I0308 03:31:37.290723 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6716923-7f46-438f-9cc4-c0f071ca5b1a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e6716923-7f46-438f-9cc4-c0f071ca5b1a" (UID: "e6716923-7f46-438f-9cc4-c0f071ca5b1a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:31:37.294488 master-0 kubenswrapper[33141]: I0308 03:31:37.294448 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gnng\" (UniqueName: \"kubernetes.io/projected/3d69f101-60a8-41fd-bcda-4eb654c626a2-kube-api-access-8gnng\") pod \"csi-snapshot-controller-operator-5685fbc7d-xbrdp\" (UID: \"3d69f101-60a8-41fd-bcda-4eb654c626a2\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-xbrdp"
Mar 08 03:31:37.312744 master-0 kubenswrapper[33141]: I0308 03:31:37.312687 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/197afe92-5912-4e90-a477-e3abe001bbc7-bound-sa-token\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"
Mar 08 03:31:37.334281 master-0 kubenswrapper[33141]: I0308 03:31:37.334225 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkp89\" (UniqueName: \"kubernetes.io/projected/7a1b7b0d-6e00-485e-86e8-7bd047569328-kube-api-access-fkp89\") pod \"packageserver-7fcc847fc6-s2lnw\" (UID: \"7a1b7b0d-6e00-485e-86e8-7bd047569328\") " pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw"
Mar 08 03:31:37.361601 master-0 kubenswrapper[33141]: I0308 03:31:37.361531 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g28tv\" (UniqueName: \"kubernetes.io/projected/27f5a0ab-3811-4c17-adc1-9ca48ae18ee1-kube-api-access-g28tv\") pod \"cluster-samples-operator-664cb58b85-fb844\" (UID: \"27f5a0ab-3811-4c17-adc1-9ca48ae18ee1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-fb844"
Mar 08 03:31:37.373945 master-0 kubenswrapper[33141]: I0308 03:31:37.373870 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89e15db4-c541-4d53-878d-706fa022f970-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-rz5c8\" (UID: \"89e15db4-c541-4d53-878d-706fa022f970\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-rz5c8"
Mar 08 03:31:37.390636 master-0 kubenswrapper[33141]: I0308 03:31:37.390586 33141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6716923-7f46-438f-9cc4-c0f071ca5b1a-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 08 03:31:37.391276 master-0 kubenswrapper[33141]: I0308 03:31:37.391245 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjzs5\" (UniqueName: \"kubernetes.io/projected/965f8eef-c5af-499b-b1db-cf63072781cc-kube-api-access-mjzs5\") pod \"cluster-storage-operator-6fbfc8dc8f-vw4v4\" (UID: \"965f8eef-c5af-499b-b1db-cf63072781cc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-vw4v4"
Mar 08 03:31:37.413139 master-0 kubenswrapper[33141]: I0308 03:31:37.413036 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5pgg\" (UniqueName: \"kubernetes.io/projected/103158c5-c99f-4224-bf5a-e23b1aaf9172-kube-api-access-m5pgg\") pod \"cluster-node-tuning-operator-66c7586884-c4zs4\" (UID: \"103158c5-c99f-4224-bf5a-e23b1aaf9172\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-c4zs4"
Mar 08 03:31:37.432050 master-0 kubenswrapper[33141]: I0308 03:31:37.431961 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kd6j\" (UniqueName: \"kubernetes.io/projected/197afe92-5912-4e90-a477-e3abe001bbc7-kube-api-access-2kd6j\") pod \"ingress-operator-677db989d6-4bpl8\" (UID: \"197afe92-5912-4e90-a477-e3abe001bbc7\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-4bpl8"
Mar 08 03:31:37.438936 master-0 kubenswrapper[33141]: I0308 03:31:37.438888 33141 request.go:700] Waited for 3.887673309s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa/token
Mar 08 03:31:37.455253 master-0 kubenswrapper[33141]: I0308 03:31:37.455207 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9vkx\" (UniqueName: \"kubernetes.io/projected/f2057f75-159d-4416-a234-050f0fe1afc9-kube-api-access-c9vkx\") pod \"apiserver-5bf974f84f-hzx44\" (UID: \"f2057f75-159d-4416-a234-050f0fe1afc9\") " pod="openshift-apiserver/apiserver-5bf974f84f-hzx44"
Mar 08 03:31:37.458068 master-0 kubenswrapper[33141]: E0308 03:31:37.458018 33141 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pode6716923_7f46_438f_9cc4_c0f071ca5b1a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-pode6716923_7f46_438f_9cc4_c0f071ca5b1a.slice/crio-fcc3b92d08a13fa636c372e9652644c8188d8f895a9f938085de2edbe54bf982\": RecentStats: unable to find data in memory cache]"
Mar 08 03:31:37.476711 master-0 kubenswrapper[33141]: I0308 03:31:37.476667 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxcml\" (UniqueName: \"kubernetes.io/projected/e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d-kube-api-access-kxcml\") pod \"router-default-79f8cd6fdd-tkxj9\" (UID: \"e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d\") " pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9"
Mar 08 03:31:37.499835 master-0 kubenswrapper[33141]: I0308 03:31:37.499737 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj7h8\" (UniqueName: \"kubernetes.io/projected/a55bef81-2381-4036-b171-3dbc77e9c25d-kube-api-access-hj7h8\") pod \"multus-jzw4f\" (UID: \"a55bef81-2381-4036-b171-3dbc77e9c25d\") " pod="openshift-multus/multus-jzw4f"
Mar 08 03:31:37.516070 master-0 kubenswrapper[33141]: I0308 03:31:37.516029 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snwdh\" (UniqueName: \"kubernetes.io/projected/6176b631-3911-41cd-beb6-5bc2e924c3a7-kube-api-access-snwdh\") pod \"ingress-canary-fhncs\" (UID: \"6176b631-3911-41cd-beb6-5bc2e924c3a7\") " pod="openshift-ingress-canary/ingress-canary-fhncs"
Mar 08 03:31:37.541430 master-0 kubenswrapper[33141]: I0308 03:31:37.541069 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl7m5\" (UniqueName: \"kubernetes.io/projected/9d40fba7-84f0-46d7-9b49-dbba7aab20c5-kube-api-access-hl7m5\") pod \"ovnkube-node-jq7bv\" (UID: \"9d40fba7-84f0-46d7-9b49-dbba7aab20c5\") " pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:31:37.562457 master-0 kubenswrapper[33141]: I0308 03:31:37.562402 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4t2j\" (UniqueName: \"kubernetes.io/projected/b537a655-ef73-40b5-b228-95ab6cfdedf2-kube-api-access-d4t2j\") pod \"machine-approver-754bdc9f9d-lssws\" (UID: \"b537a655-ef73-40b5-b228-95ab6cfdedf2\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-lssws"
Mar 08 03:31:37.578540 master-0 kubenswrapper[33141]: I0308 03:31:37.578469 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d8xq\" (UniqueName: \"kubernetes.io/projected/9fb588a9-6240-4513-8e4b-248eb43d3f06-kube-api-access-5d8xq\") pod \"csi-snapshot-controller-7577d6f48-kfmd9\" (UID: \"9fb588a9-6240-4513-8e4b-248eb43d3f06\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9"
Mar 08 03:31:37.599073 master-0 kubenswrapper[33141]: I0308 03:31:37.598996 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4gcw\" (UniqueName: \"kubernetes.io/projected/38287d1a-b784-4ce9-9650-949d92469519-kube-api-access-f4gcw\") pod \"cloud-credential-operator-55d85b7b47-9hjss\" (UID: \"38287d1a-b784-4ce9-9650-949d92469519\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-9hjss"
Mar 08 03:31:37.613765 master-0 kubenswrapper[33141]: I0308 03:31:37.613705 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sstv2\" (UniqueName: \"kubernetes.io/projected/d68278f6-59d5-4bbf-b969-e47635ffd4cc-kube-api-access-sstv2\") pod \"olm-operator-d64cfc9db-t659n\" (UID: \"d68278f6-59d5-4bbf-b969-e47635ffd4cc\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n"
Mar 08 03:31:37.632681 master-0 kubenswrapper[33141]: I0308 03:31:37.632641 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm9tk\" (UniqueName: \"kubernetes.io/projected/7af634f0-65ac-402a-acd6-a8aad11b37ab-kube-api-access-sm9tk\") pod \"service-ca-84bfdbbb7f-jnpl5\" (UID: \"7af634f0-65ac-402a-acd6-a8aad11b37ab\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-jnpl5"
Mar 08 03:31:37.664514 master-0 kubenswrapper[33141]: I0308 03:31:37.664477 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njrcj\" (UniqueName: \"kubernetes.io/projected/f6ee6202-11e5-4586-ae46-075da1ad7f1a-kube-api-access-njrcj\") pod \"network-metrics-daemon-2l64n\" (UID: \"f6ee6202-11e5-4586-ae46-075da1ad7f1a\") " pod="openshift-multus/network-metrics-daemon-2l64n"
Mar 08 03:31:37.693506 master-0 kubenswrapper[33141]: I0308 03:31:37.693457 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r6wb\" (UniqueName: \"kubernetes.io/projected/ea474cd1-8693-4505-9d6f-863d78776d11-kube-api-access-2r6wb\") pod \"community-operators-82rfr\" (UID: \"ea474cd1-8693-4505-9d6f-863d78776d11\") " pod="openshift-marketplace/community-operators-82rfr"
Mar 08 03:31:37.714483 master-0 kubenswrapper[33141]: I0308 03:31:37.714446 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p4tj\" (UniqueName: \"kubernetes.io/projected/5d29f16f-e26f-4b9d-a646-230316e936a8-kube-api-access-7p4tj\") pod \"tuned-qjpkx\" (UID: \"5d29f16f-e26f-4b9d-a646-230316e936a8\") " pod="openshift-cluster-node-tuning-operator/tuned-qjpkx"
Mar 08 03:31:37.720086 master-0 kubenswrapper[33141]: I0308 03:31:37.720033 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wplgs\" (UniqueName: \"kubernetes.io/projected/bd1bcaff-7dbd-4559-92fc-5453993f643e-kube-api-access-wplgs\") pod \"openshift-config-operator-64488f9d78-d4wnv\" (UID: \"bd1bcaff-7dbd-4559-92fc-5453993f643e\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"
Mar 08 03:31:37.733942 master-0 kubenswrapper[33141]: I0308 03:31:37.733886 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgfrv\" (UniqueName: \"kubernetes.io/projected/ef16d7ae-66aa-45d4-b1a6-1327738a46bb-kube-api-access-mgfrv\") pod \"dns-operator-589895fbb7-9mhwc\" (UID: \"ef16d7ae-66aa-45d4-b1a6-1327738a46bb\") " pod="openshift-dns-operator/dns-operator-589895fbb7-9mhwc"
Mar 08 03:31:37.751957 master-0 kubenswrapper[33141]: I0308 03:31:37.751893 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctdbq\" (UniqueName: \"kubernetes.io/projected/ae8f3a1e-689b-4107-993a-dde67f4decf2-kube-api-access-ctdbq\") pod \"prometheus-operator-5ff8674d55-lkwmx\" (UID: \"ae8f3a1e-689b-4107-993a-dde67f4decf2\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-lkwmx"
Mar 08 03:31:37.775762 master-0 kubenswrapper[33141]: I0308 03:31:37.775653 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qvl4\" (UniqueName: \"kubernetes.io/projected/1d446527-f3fd-4a37-a980-7445031928d1-kube-api-access-2qvl4\") pod \"kube-storage-version-migrator-operator-7f65c457f5-7k8j7\" (UID: \"1d446527-f3fd-4a37-a980-7445031928d1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-7k8j7"
Mar 08 03:31:37.796427 master-0 kubenswrapper[33141]: I0308 03:31:37.796326 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxhht\" (UniqueName: \"kubernetes.io/projected/81abc17a-8a51-44e2-a5df-5ddb394a9fa6-kube-api-access-cxhht\") pod \"machine-config-operator-fdb5c78b5-qfbvt\" (UID: \"81abc17a-8a51-44e2-a5df-5ddb394a9fa6\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-qfbvt"
Mar 08 03:31:37.816055 master-0 kubenswrapper[33141]: I0308 03:31:37.815994 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t29sr\" (UniqueName: \"kubernetes.io/projected/daf9e0ac-b5a3-4a3e-aa57-31b810f634ef-kube-api-access-t29sr\") pod \"multus-admission-controller-7769569c45-lxr7s\" (UID: \"daf9e0ac-b5a3-4a3e-aa57-31b810f634ef\") " pod="openshift-multus/multus-admission-controller-7769569c45-lxr7s"
Mar 08 03:31:37.845241 master-0 kubenswrapper[33141]: I0308 03:31:37.845193 33141 scope.go:117] "RemoveContainer" containerID="b291f8e827490042eb3fb88b716e290e8802aa029f1abc52b08ef049c1f2620a"
Mar 08 03:31:37.870157 master-0 kubenswrapper[33141]: E0308 03:31:37.870109 33141 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0"
Mar 08 03:31:37.912098 master-0 kubenswrapper[33141]: I0308 03:31:37.912002 33141 kubelet_pods.go:1320] "Clean up containers for orphaned pod we had not seen before" podUID="5f77c8e18b751d90bc0dfe2d4e304050" killPodOptions=""
Mar 08 03:31:37.912850 master-0 kubenswrapper[33141]: E0308 03:31:37.912826 33141 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.563s"
Mar 08 03:31:37.913226 master-0 kubenswrapper[33141]: I0308 03:31:37.913209 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:31:37.913313 master-0 kubenswrapper[33141]: I0308 03:31:37.913294 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-retry-1-master-0" event={"ID":"e6716923-7f46-438f-9cc4-c0f071ca5b1a","Type":"ContainerDied","Data":"fcc3b92d08a13fa636c372e9652644c8188d8f895a9f938085de2edbe54bf982"}
Mar 08 03:31:37.913401 master-0 kubenswrapper[33141]: I0308 03:31:37.913384 33141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcc3b92d08a13fa636c372e9652644c8188d8f895a9f938085de2edbe54bf982"
Mar 08 03:31:37.913515 master-0 kubenswrapper[33141]: I0308 03:31:37.913472 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 08 03:31:37.913676 master-0 kubenswrapper[33141]: I0308 03:31:37.913659 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r97mb"
Mar 08 03:31:37.913836 master-0 kubenswrapper[33141]: I0308 03:31:37.913818 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k6hg9"
Mar 08 03:31:37.913956 master-0 kubenswrapper[33141]: I0308 03:31:37.913939 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerDied","Data":"29daacb2c26fcf18f9f3b673ab22e9e9aa0de4d9b19b229cdf38f36ca276b550"}
Mar 08 03:31:37.922538 master-0 kubenswrapper[33141]: I0308 03:31:37.922448 33141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f77c8e18b751d90bc0dfe2d4e304050" path="/var/lib/kubelet/pods/5f77c8e18b751d90bc0dfe2d4e304050/volumes"
Mar 08 03:31:37.922997 master-0 kubenswrapper[33141]: I0308 03:31:37.922971 33141 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Mar 08 03:31:37.968861 master-0 kubenswrapper[33141]: E0308 03:31:37.968803 33141 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-startup-monitor-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:31:37.990085 master-0 kubenswrapper[33141]: E0308 03:31:37.990033 33141 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:31:37.990456 master-0 kubenswrapper[33141]: I0308 03:31:37.990425 33141 scope.go:117] "RemoveContainer" containerID="29daacb2c26fcf18f9f3b673ab22e9e9aa0de4d9b19b229cdf38f36ca276b550"
Mar 08 03:31:38.003978 master-0 kubenswrapper[33141]: I0308 03:31:38.003940 33141 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Mar 08 03:31:38.004102 master-0 kubenswrapper[33141]: I0308 03:31:38.004022 33141 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 08 03:31:38.022227 master-0 kubenswrapper[33141]: I0308 03:31:38.022110 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 08 03:31:38.022345 master-0 kubenswrapper[33141]: I0308 03:31:38.022225 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 08 03:31:38.022421 master-0 kubenswrapper[33141]: I0308 03:31:38.022371 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4h9n9"
Mar 08 03:31:38.022493 master-0 kubenswrapper[33141]: I0308 03:31:38.022448 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl"
Mar 08 03:31:38.022493 master-0 kubenswrapper[33141]: I0308 03:31:38.022488 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 08 03:31:38.022639 master-0 kubenswrapper[33141]: I0308 03:31:38.022506 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r97mb"
Mar 08 03:31:38.022639 master-0 kubenswrapper[33141]: I0308 03:31:38.022525 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k6hg9"
Mar 08 03:31:38.022639 master-0 kubenswrapper[33141]: I0308 03:31:38.022563 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4h9n9"
Mar 08 03:31:38.022849 master-0 kubenswrapper[33141]: I0308 03:31:38.022694 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r97mb"
Mar 08 03:31:38.023013 master-0 kubenswrapper[33141]: I0308 03:31:38.022964 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k6hg9"
Mar 08 03:31:38.023291 master-0 kubenswrapper[33141]: I0308 03:31:38.023102 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4h9n9"
Mar 08 03:31:38.023551 master-0 kubenswrapper[33141]: I0308 03:31:38.023437 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf"
Mar 08 03:31:38.023551 master-0 kubenswrapper[33141]: I0308 03:31:38.023476 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 08 03:31:38.023551 master-0 kubenswrapper[33141]: I0308 03:31:38.023491 33141 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="236d9cf9-abe3-4808-9165-06e61cadf867"
Mar 08 03:31:38.023551 master-0 kubenswrapper[33141]: I0308 03:31:38.023522 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-4pgcf"
Mar 08 03:31:38.023551 master-0 kubenswrapper[33141]: I0308 03:31:38.023554 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"
Mar 08 03:31:38.024057 master-0 kubenswrapper[33141]: I0308 03:31:38.023626 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9"
Mar 08 03:31:38.024057 master-0 kubenswrapper[33141]: I0308 03:31:38.023683 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-5bf974f84f-hzx44"
Mar 08 03:31:38.024057 master-0 kubenswrapper[33141]: I0308 03:31:38.023708 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-rjwdp"
Mar 08 03:31:38.024057 master-0 kubenswrapper[33141]: I0308 03:31:38.023724 33141 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 08 03:31:38.024057 master-0 kubenswrapper[33141]: I0308 03:31:38.023735 33141 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="236d9cf9-abe3-4808-9165-06e61cadf867"
Mar 08 03:31:38.024057 master-0 kubenswrapper[33141]: I0308 03:31:38.023752 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:31:38.024057 master-0 kubenswrapper[33141]: I0308 03:31:38.023839 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-82rfr"
Mar 08 03:31:38.024057 master-0 kubenswrapper[33141]: I0308 03:31:38.023874 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l"
Mar 08 03:31:38.024057 master-0 kubenswrapper[33141]: I0308 03:31:38.023899 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l"
Mar 08 03:31:38.024057 master-0 kubenswrapper[33141]: I0308 03:31:38.023990 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx"
Mar 08 03:31:38.024057 master-0 kubenswrapper[33141]: I0308 03:31:38.024018 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw"
Mar 08 03:31:38.024057 master-0 kubenswrapper[33141]: I0308 03:31:38.024040 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-wsswx"
Mar 08 03:31:38.024057 master-0 kubenswrapper[33141]: I0308 03:31:38.024062 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-8qznw"
Mar 08 03:31:38.024057 master-0 kubenswrapper[33141]: I0308 03:31:38.024086 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-4lx8s"
Mar 08 03:31:38.024886 master-0 kubenswrapper[33141]: I0308 03:31:38.024109 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-4lx8s"
Mar 08 03:31:38.024886 master-0 kubenswrapper[33141]: I0308 03:31:38.024137 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl"
Mar 08 03:31:38.024886 master-0 kubenswrapper[33141]: I0308 03:31:38.024165 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2"
Mar 08 03:31:38.024886 master-0 kubenswrapper[33141]: I0308 03:31:38.024197 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-p6kjc"
Mar 08 03:31:38.024886 master-0 kubenswrapper[33141]: I0308 03:31:38.024224 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh"
Mar 08 03:31:38.024886 master-0 kubenswrapper[33141]: I0308 03:31:38.024271 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw"
Mar 08 03:31:38.024886 master-0 kubenswrapper[33141]: I0308 03:31:38.024306 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9"
Mar 08 03:31:38.024886 master-0 kubenswrapper[33141]: I0308 03:31:38.024352 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:31:38.024886 master-0 kubenswrapper[33141]: I0308 03:31:38.024390 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:31:38.024886 master-0 kubenswrapper[33141]: I0308 03:31:38.024420 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-5bf974f84f-hzx44"
Mar 08 03:31:38.024886 master-0 kubenswrapper[33141]: I0308 03:31:38.024448 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"
Mar 08 03:31:38.024886 master-0 kubenswrapper[33141]: I0308 03:31:38.024472 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n"
Mar 08 03:31:38.024886 master-0 kubenswrapper[33141]: I0308 03:31:38.024499 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-82rfr"
Mar 08 03:31:38.024886 master-0 kubenswrapper[33141]: I0308 03:31:38.024512 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:31:38.024886 master-0 kubenswrapper[33141]: I0308 03:31:38.024543 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:31:38.034504 master-0 kubenswrapper[33141]: I0308 03:31:38.034361 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Mar 08 03:31:38.034504 master-0 kubenswrapper[33141]: I0308 03:31:38.034438 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-c74s2"
Mar 08 03:31:38.034683 master-0 kubenswrapper[33141]: I0308 03:31:38.034606 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-p6kjc"
Mar 08 03:31:38.034683 master-0 kubenswrapper[33141]: I0308 03:31:38.034664 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9"
Mar 08 03:31:38.034776 master-0 kubenswrapper[33141]: I0308 03:31:38.034689 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7fcc847fc6-s2lnw"
Mar 08 03:31:38.034776 master-0 kubenswrapper[33141]: I0308 03:31:38.034716 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl"
Mar 08 03:31:38.035690 master-0 kubenswrapper[33141]: I0308 03:31:38.035660 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-5bf974f84f-hzx44"
Mar 08 03:31:38.037482 master-0 kubenswrapper[33141]: I0308 03:31:38.037435 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh"
Mar 08 03:31:38.039179 master-0 kubenswrapper[33141]: I0308 03:31:38.039094 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-t659n"
Mar 08 03:31:38.042380 master-0 kubenswrapper[33141]: I0308 03:31:38.042264 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Mar 08 03:31:38.043761 master-0 kubenswrapper[33141]: I0308 03:31:38.043623 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-d4wnv"
Mar 08 03:31:38.065606 master-0 kubenswrapper[33141]: I0308 03:31:38.062950 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:31:38.073232 master-0 kubenswrapper[33141]: I0308 03:31:38.073184 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv"
Mar 08 03:31:38.077782 master-0 kubenswrapper[33141]: I0308 03:31:38.077742 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-82rfr"
Mar 08 03:31:38.575533 master-0 kubenswrapper[33141]: I0308 03:31:38.575472 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-79f8cd6fdd-tkxj9"
Mar 08 03:31:38.873080 master-0 kubenswrapper[33141]:
I0308 03:31:38.872977 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/5.log" Mar 08 03:31:38.873272 master-0 kubenswrapper[33141]: I0308 03:31:38.873099 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-kfmd9" event={"ID":"9fb588a9-6240-4513-8e4b-248eb43d3f06","Type":"ContainerStarted","Data":"de2df4b5fda14412d99b06aee3e69fd91fd8d2fb14b5cf94025c58cde9d4f5e2"} Mar 08 03:31:38.875621 master-0 kubenswrapper[33141]: I0308 03:31:38.875580 33141 scope.go:117] "RemoveContainer" containerID="296632ab9853e033010913fee076e7b35b875fbd7f05c08351eaf2c0ae69f50d" Mar 08 03:31:38.877882 master-0 kubenswrapper[33141]: I0308 03:31:38.877824 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-check-endpoints/0.log" Mar 08 03:31:38.879653 master-0 kubenswrapper[33141]: I0308 03:31:38.879598 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"dbdcea86ab103c9cb9c14c9c2273f8b5a1a4edbdcf1befddd3bc41a62bab119e"} Mar 08 03:31:38.879806 master-0 kubenswrapper[33141]: I0308 03:31:38.879775 33141 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 03:31:38.892250 master-0 kubenswrapper[33141]: I0308 03:31:38.892190 33141 scope.go:117] "RemoveContainer" containerID="bf4fabb9c08963210bf1f0d197a394d399879939569bdcc3789dd4b487cec36f" Mar 08 03:31:38.913010 master-0 kubenswrapper[33141]: I0308 03:31:38.912967 33141 scope.go:117] "RemoveContainer" containerID="c01067259586e4e210f6ac056b5faed267ec0e7e5fd3d0ff25d2928d118c8a91" Mar 08 03:31:39.887108 master-0 kubenswrapper[33141]: I0308 03:31:39.887066 
33141 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 03:31:39.888181 master-0 kubenswrapper[33141]: I0308 03:31:39.888151 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:31:39.912944 master-0 kubenswrapper[33141]: I0308 03:31:39.912840 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=2.912822349 podStartE2EDuration="2.912822349s" podCreationTimestamp="2026-03-08 03:31:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:31:39.911049744 +0000 UTC m=+13.780942947" watchObservedRunningTime="2026-03-08 03:31:39.912822349 +0000 UTC m=+13.782715542" Mar 08 03:31:40.532096 master-0 kubenswrapper[33141]: I0308 03:31:40.532051 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:31:40.536041 master-0 kubenswrapper[33141]: I0308 03:31:40.536022 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:31:40.683030 master-0 kubenswrapper[33141]: I0308 03:31:40.682942 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=3.6829223410000003 podStartE2EDuration="3.682922341s" podCreationTimestamp="2026-03-08 03:31:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:31:40.6804959 +0000 UTC m=+14.550389093" watchObservedRunningTime="2026-03-08 03:31:40.682922341 +0000 UTC m=+14.552815534" Mar 08 03:31:41.254791 master-0 kubenswrapper[33141]: I0308 
03:31:41.254722 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:31:41.259665 master-0 kubenswrapper[33141]: I0308 03:31:41.259623 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:31:41.902306 master-0 kubenswrapper[33141]: I0308 03:31:41.902218 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:31:41.906882 master-0 kubenswrapper[33141]: I0308 03:31:41.906777 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:31:42.078413 master-0 kubenswrapper[33141]: I0308 03:31:42.075571 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:31:42.081794 master-0 kubenswrapper[33141]: I0308 03:31:42.081742 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:31:42.249004 master-0 kubenswrapper[33141]: I0308 03:31:42.248868 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7b545788fb-82rjl" Mar 08 03:31:42.585813 master-0 kubenswrapper[33141]: I0308 03:31:42.585768 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-5bf974f84f-hzx44" Mar 08 03:31:42.911188 master-0 kubenswrapper[33141]: I0308 03:31:42.911085 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:31:44.995599 master-0 kubenswrapper[33141]: I0308 03:31:44.995547 33141 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmh2" Mar 08 03:31:45.000958 master-0 kubenswrapper[33141]: I0308 03:31:45.000893 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-dfmh2" Mar 08 03:31:46.740002 master-0 kubenswrapper[33141]: I0308 03:31:46.739689 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k6hg9" Mar 08 03:31:46.744595 master-0 kubenswrapper[33141]: I0308 03:31:46.744555 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r97mb" Mar 08 03:31:46.747545 master-0 kubenswrapper[33141]: I0308 03:31:46.747515 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4h9n9" Mar 08 03:31:47.905725 master-0 kubenswrapper[33141]: I0308 03:31:47.905664 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-82rfr" Mar 08 03:31:48.251496 master-0 kubenswrapper[33141]: I0308 03:31:48.251341 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:48.251727 master-0 kubenswrapper[33141]: I0308 03:31:48.251608 33141 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 03:31:48.289470 master-0 kubenswrapper[33141]: I0308 03:31:48.289396 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jq7bv" Mar 08 03:31:51.828562 master-0 kubenswrapper[33141]: I0308 03:31:51.828468 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 08 03:31:51.829228 master-0 kubenswrapper[33141]: E0308 03:31:51.828893 33141 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="627f0501-8b6a-4bc7-b610-355a0661f385" containerName="installer" Mar 08 03:31:51.829228 master-0 kubenswrapper[33141]: I0308 03:31:51.828950 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="627f0501-8b6a-4bc7-b610-355a0661f385" containerName="installer" Mar 08 03:31:51.829228 master-0 kubenswrapper[33141]: E0308 03:31:51.829009 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 08 03:31:51.829228 master-0 kubenswrapper[33141]: I0308 03:31:51.829022 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 08 03:31:51.829228 master-0 kubenswrapper[33141]: E0308 03:31:51.829051 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7152f2-d51f-4e15-8e0a-92278cbecd53" containerName="installer" Mar 08 03:31:51.829228 master-0 kubenswrapper[33141]: I0308 03:31:51.829068 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7152f2-d51f-4e15-8e0a-92278cbecd53" containerName="installer" Mar 08 03:31:51.829228 master-0 kubenswrapper[33141]: E0308 03:31:51.829106 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea52bbe-5b64-45c7-8f8c-81d027f133d0" containerName="installer" Mar 08 03:31:51.829228 master-0 kubenswrapper[33141]: I0308 03:31:51.829118 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea52bbe-5b64-45c7-8f8c-81d027f133d0" containerName="installer" Mar 08 03:31:51.829228 master-0 kubenswrapper[33141]: E0308 03:31:51.829175 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c20b192-755d-46cd-ab12-2e823b92222e" containerName="installer" Mar 08 03:31:51.829228 master-0 kubenswrapper[33141]: I0308 03:31:51.829188 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c20b192-755d-46cd-ab12-2e823b92222e" containerName="installer" Mar 08 03:31:51.829228 master-0 
kubenswrapper[33141]: E0308 03:31:51.829226 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5e953eb-2d1d-4d67-969b-bdecc69b61f0" containerName="assisted-installer-controller" Mar 08 03:31:51.829228 master-0 kubenswrapper[33141]: I0308 03:31:51.829240 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5e953eb-2d1d-4d67-969b-bdecc69b61f0" containerName="assisted-installer-controller" Mar 08 03:31:51.829732 master-0 kubenswrapper[33141]: E0308 03:31:51.829266 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddf7d93b-6a73-4de5-b984-cde6fba07b48" containerName="installer" Mar 08 03:31:51.829732 master-0 kubenswrapper[33141]: I0308 03:31:51.829279 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddf7d93b-6a73-4de5-b984-cde6fba07b48" containerName="installer" Mar 08 03:31:51.829732 master-0 kubenswrapper[33141]: E0308 03:31:51.829309 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 08 03:31:51.829732 master-0 kubenswrapper[33141]: I0308 03:31:51.829322 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 08 03:31:51.829732 master-0 kubenswrapper[33141]: E0308 03:31:51.829358 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6716923-7f46-438f-9cc4-c0f071ca5b1a" containerName="installer" Mar 08 03:31:51.829732 master-0 kubenswrapper[33141]: I0308 03:31:51.829371 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6716923-7f46-438f-9cc4-c0f071ca5b1a" containerName="installer" Mar 08 03:31:51.829732 master-0 kubenswrapper[33141]: E0308 03:31:51.829401 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 08 03:31:51.829732 master-0 kubenswrapper[33141]: I0308 03:31:51.829414 33141 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 08 03:31:51.829732 master-0 kubenswrapper[33141]: E0308 03:31:51.829446 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a2e5993-e0cb-4c63-9dda-abbb60bfe42b" containerName="installer" Mar 08 03:31:51.829732 master-0 kubenswrapper[33141]: I0308 03:31:51.829460 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a2e5993-e0cb-4c63-9dda-abbb60bfe42b" containerName="installer" Mar 08 03:31:51.829732 master-0 kubenswrapper[33141]: E0308 03:31:51.829484 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 08 03:31:51.829732 master-0 kubenswrapper[33141]: I0308 03:31:51.829498 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 08 03:31:51.829732 master-0 kubenswrapper[33141]: E0308 03:31:51.829521 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a8d4b89-fd81-4418-9f72-c8447fad86ad" containerName="installer" Mar 08 03:31:51.829732 master-0 kubenswrapper[33141]: I0308 03:31:51.829534 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8d4b89-fd81-4418-9f72-c8447fad86ad" containerName="installer" Mar 08 03:31:51.829732 master-0 kubenswrapper[33141]: E0308 03:31:51.829552 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed2e0194-6b50-4478-aba4-21193d2c18aa" containerName="installer" Mar 08 03:31:51.829732 master-0 kubenswrapper[33141]: I0308 03:31:51.829565 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed2e0194-6b50-4478-aba4-21193d2c18aa" containerName="installer" Mar 08 03:31:51.829732 master-0 kubenswrapper[33141]: E0308 03:31:51.829596 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" 
containerName="kube-controller-manager" Mar 08 03:31:51.829732 master-0 kubenswrapper[33141]: I0308 03:31:51.829609 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 03:31:51.830741 master-0 kubenswrapper[33141]: I0308 03:31:51.829818 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddf7d93b-6a73-4de5-b984-cde6fba07b48" containerName="installer" Mar 08 03:31:51.830741 master-0 kubenswrapper[33141]: I0308 03:31:51.829843 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 08 03:31:51.830741 master-0 kubenswrapper[33141]: I0308 03:31:51.829865 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a7152f2-d51f-4e15-8e0a-92278cbecd53" containerName="installer" Mar 08 03:31:51.830741 master-0 kubenswrapper[33141]: I0308 03:31:51.829900 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5e953eb-2d1d-4d67-969b-bdecc69b61f0" containerName="assisted-installer-controller" Mar 08 03:31:51.830741 master-0 kubenswrapper[33141]: I0308 03:31:51.829982 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 08 03:31:51.830741 master-0 kubenswrapper[33141]: I0308 03:31:51.830017 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a2e5993-e0cb-4c63-9dda-abbb60bfe42b" containerName="installer" Mar 08 03:31:51.830741 master-0 kubenswrapper[33141]: I0308 03:31:51.830047 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c20b192-755d-46cd-ab12-2e823b92222e" containerName="installer" Mar 08 03:31:51.830741 master-0 kubenswrapper[33141]: I0308 03:31:51.830073 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed2e0194-6b50-4478-aba4-21193d2c18aa" containerName="installer" Mar 08 
03:31:51.830741 master-0 kubenswrapper[33141]: I0308 03:31:51.830089 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6716923-7f46-438f-9cc4-c0f071ca5b1a" containerName="installer" Mar 08 03:31:51.830741 master-0 kubenswrapper[33141]: I0308 03:31:51.830113 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 08 03:31:51.830741 master-0 kubenswrapper[33141]: I0308 03:31:51.830135 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a8d4b89-fd81-4418-9f72-c8447fad86ad" containerName="installer" Mar 08 03:31:51.830741 master-0 kubenswrapper[33141]: I0308 03:31:51.830151 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="627f0501-8b6a-4bc7-b610-355a0661f385" containerName="installer" Mar 08 03:31:51.830741 master-0 kubenswrapper[33141]: I0308 03:31:51.830170 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="aea52bbe-5b64-45c7-8f8c-81d027f133d0" containerName="installer" Mar 08 03:31:51.830741 master-0 kubenswrapper[33141]: I0308 03:31:51.830192 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 08 03:31:51.830741 master-0 kubenswrapper[33141]: I0308 03:31:51.830208 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 08 03:31:51.831369 master-0 kubenswrapper[33141]: I0308 03:31:51.830857 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 03:31:51.833195 master-0 kubenswrapper[33141]: I0308 03:31:51.833163 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 08 03:31:51.833510 master-0 kubenswrapper[33141]: I0308 03:31:51.833468 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-2tj6k" Mar 08 03:31:51.850702 master-0 kubenswrapper[33141]: I0308 03:31:51.850653 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 08 03:31:52.029681 master-0 kubenswrapper[33141]: I0308 03:31:52.029588 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2129802f-8b19-4eee-8ac3-1cb980b067b7-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"2129802f-8b19-4eee-8ac3-1cb980b067b7\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 03:31:52.029681 master-0 kubenswrapper[33141]: I0308 03:31:52.029677 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2129802f-8b19-4eee-8ac3-1cb980b067b7-var-lock\") pod \"installer-3-master-0\" (UID: \"2129802f-8b19-4eee-8ac3-1cb980b067b7\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 03:31:52.030202 master-0 kubenswrapper[33141]: I0308 03:31:52.029834 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2129802f-8b19-4eee-8ac3-1cb980b067b7-kube-api-access\") pod \"installer-3-master-0\" (UID: \"2129802f-8b19-4eee-8ac3-1cb980b067b7\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 03:31:52.131122 master-0 
kubenswrapper[33141]: I0308 03:31:52.130863 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2129802f-8b19-4eee-8ac3-1cb980b067b7-kube-api-access\") pod \"installer-3-master-0\" (UID: \"2129802f-8b19-4eee-8ac3-1cb980b067b7\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 03:31:52.131122 master-0 kubenswrapper[33141]: I0308 03:31:52.131118 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2129802f-8b19-4eee-8ac3-1cb980b067b7-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"2129802f-8b19-4eee-8ac3-1cb980b067b7\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 03:31:52.131513 master-0 kubenswrapper[33141]: I0308 03:31:52.131158 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2129802f-8b19-4eee-8ac3-1cb980b067b7-var-lock\") pod \"installer-3-master-0\" (UID: \"2129802f-8b19-4eee-8ac3-1cb980b067b7\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 03:31:52.131513 master-0 kubenswrapper[33141]: I0308 03:31:52.131355 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2129802f-8b19-4eee-8ac3-1cb980b067b7-var-lock\") pod \"installer-3-master-0\" (UID: \"2129802f-8b19-4eee-8ac3-1cb980b067b7\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 03:31:52.131513 master-0 kubenswrapper[33141]: I0308 03:31:52.131378 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2129802f-8b19-4eee-8ac3-1cb980b067b7-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"2129802f-8b19-4eee-8ac3-1cb980b067b7\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 03:31:52.155599 
master-0 kubenswrapper[33141]: I0308 03:31:52.155510 33141 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 08 03:31:52.171704 master-0 kubenswrapper[33141]: I0308 03:31:52.171607 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2129802f-8b19-4eee-8ac3-1cb980b067b7-kube-api-access\") pod \"installer-3-master-0\" (UID: \"2129802f-8b19-4eee-8ac3-1cb980b067b7\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 03:31:52.466582 master-0 kubenswrapper[33141]: I0308 03:31:52.466413 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 03:31:52.898740 master-0 kubenswrapper[33141]: I0308 03:31:52.898677 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 08 03:31:52.970122 master-0 kubenswrapper[33141]: I0308 03:31:52.970033 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"2129802f-8b19-4eee-8ac3-1cb980b067b7","Type":"ContainerStarted","Data":"a07a1ce3b7b21b02788752f5d94b739f3f01217959cc4e943a9ae32b5bafafbe"} Mar 08 03:31:53.979407 master-0 kubenswrapper[33141]: I0308 03:31:53.979062 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"2129802f-8b19-4eee-8ac3-1cb980b067b7","Type":"ContainerStarted","Data":"bdfa69d061b532aa4500a61c6d722eb62da8a58dc2b287915aaa581ce754b8ae"} Mar 08 03:31:54.006173 master-0 kubenswrapper[33141]: I0308 03:31:54.005961 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" 
podStartSLOduration=3.00593978 podStartE2EDuration="3.00593978s" podCreationTimestamp="2026-03-08 03:31:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:31:54.003783296 +0000 UTC m=+27.873676529" watchObservedRunningTime="2026-03-08 03:31:54.00593978 +0000 UTC m=+27.875832973" Mar 08 03:31:54.597897 master-0 kubenswrapper[33141]: I0308 03:31:54.597804 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 08 03:31:54.599247 master-0 kubenswrapper[33141]: I0308 03:31:54.599197 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 08 03:31:54.602254 master-0 kubenswrapper[33141]: I0308 03:31:54.602207 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-sglg6" Mar 08 03:31:54.602429 master-0 kubenswrapper[33141]: I0308 03:31:54.602284 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 08 03:31:54.612003 master-0 kubenswrapper[33141]: I0308 03:31:54.611872 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 08 03:31:54.671252 master-0 kubenswrapper[33141]: I0308 03:31:54.671162 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f17bdb20-5114-45c4-a27b-1260baba6bfa-kube-api-access\") pod \"installer-4-master-0\" (UID: \"f17bdb20-5114-45c4-a27b-1260baba6bfa\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 08 03:31:54.671252 master-0 kubenswrapper[33141]: I0308 03:31:54.671252 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f17bdb20-5114-45c4-a27b-1260baba6bfa-var-lock\") pod \"installer-4-master-0\" (UID: \"f17bdb20-5114-45c4-a27b-1260baba6bfa\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 08 03:31:54.671849 master-0 kubenswrapper[33141]: I0308 03:31:54.671436 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f17bdb20-5114-45c4-a27b-1260baba6bfa-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"f17bdb20-5114-45c4-a27b-1260baba6bfa\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 08 03:31:54.772948 master-0 kubenswrapper[33141]: I0308 03:31:54.772683 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f17bdb20-5114-45c4-a27b-1260baba6bfa-kube-api-access\") pod \"installer-4-master-0\" (UID: \"f17bdb20-5114-45c4-a27b-1260baba6bfa\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 08 03:31:54.772948 master-0 kubenswrapper[33141]: I0308 03:31:54.772787 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f17bdb20-5114-45c4-a27b-1260baba6bfa-var-lock\") pod \"installer-4-master-0\" (UID: \"f17bdb20-5114-45c4-a27b-1260baba6bfa\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 08 03:31:54.772948 master-0 kubenswrapper[33141]: I0308 03:31:54.772876 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f17bdb20-5114-45c4-a27b-1260baba6bfa-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"f17bdb20-5114-45c4-a27b-1260baba6bfa\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 08 03:31:54.775291 master-0 kubenswrapper[33141]: I0308 03:31:54.773139 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f17bdb20-5114-45c4-a27b-1260baba6bfa-var-lock\") pod \"installer-4-master-0\" (UID: \"f17bdb20-5114-45c4-a27b-1260baba6bfa\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 08 03:31:54.775291 master-0 kubenswrapper[33141]: I0308 03:31:54.773189 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f17bdb20-5114-45c4-a27b-1260baba6bfa-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"f17bdb20-5114-45c4-a27b-1260baba6bfa\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 08 03:31:54.809108 master-0 kubenswrapper[33141]: I0308 03:31:54.807501 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f17bdb20-5114-45c4-a27b-1260baba6bfa-kube-api-access\") pod \"installer-4-master-0\" (UID: \"f17bdb20-5114-45c4-a27b-1260baba6bfa\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 08 03:31:54.935890 master-0 kubenswrapper[33141]: I0308 03:31:54.935729 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 08 03:31:55.391880 master-0 kubenswrapper[33141]: I0308 03:31:55.391803 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 08 03:31:55.407116 master-0 kubenswrapper[33141]: W0308 03:31:55.407012 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf17bdb20_5114_45c4_a27b_1260baba6bfa.slice/crio-f29a122d553a69e7964cbce6151ebb321d65108a71bb80806659dcb23be6c21b WatchSource:0}: Error finding container f29a122d553a69e7964cbce6151ebb321d65108a71bb80806659dcb23be6c21b: Status 404 returned error can't find the container with id f29a122d553a69e7964cbce6151ebb321d65108a71bb80806659dcb23be6c21b
Mar 08 03:31:55.617693 master-0 kubenswrapper[33141]: I0308 03:31:55.617637 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:31:55.676539 master-0 kubenswrapper[33141]: I0308 03:31:55.676445 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9"
Mar 08 03:31:55.684694 master-0 kubenswrapper[33141]: I0308 03:31:55.684294 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9"
Mar 08 03:31:56.000748 master-0 kubenswrapper[33141]: I0308 03:31:56.000561 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"f17bdb20-5114-45c4-a27b-1260baba6bfa","Type":"ContainerStarted","Data":"35dd849142695f1b0a12b95d1af188927f89e7be526df4fb812e8984733429ad"}
Mar 08 03:31:56.000748 master-0 kubenswrapper[33141]: I0308 03:31:56.000630 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"f17bdb20-5114-45c4-a27b-1260baba6bfa","Type":"ContainerStarted","Data":"f29a122d553a69e7964cbce6151ebb321d65108a71bb80806659dcb23be6c21b"}
Mar 08 03:31:56.068609 master-0 kubenswrapper[33141]: I0308 03:31:56.068514 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=2.06849318 podStartE2EDuration="2.06849318s" podCreationTimestamp="2026-03-08 03:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:31:56.067395873 +0000 UTC m=+29.937289086" watchObservedRunningTime="2026-03-08 03:31:56.06849318 +0000 UTC m=+29.938386383"
Mar 08 03:32:00.026616 master-0 kubenswrapper[33141]: I0308 03:32:00.026530 33141 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 08 03:32:00.027213 master-0 kubenswrapper[33141]: I0308 03:32:00.026953 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor" containerID="cri-o://325c06ebd1141b90c76331b637fb56d0b788e8b9804bc03c4400966f3dd29304" gracePeriod=5
Mar 08 03:32:05.630149 master-0 kubenswrapper[33141]: I0308 03:32:05.630104 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_899242a15b2bdf3b4a04fb323647ca94/startup-monitor/0.log"
Mar 08 03:32:05.630986 master-0 kubenswrapper[33141]: I0308 03:32:05.630173 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:32:05.751558 master-0 kubenswrapper[33141]: I0308 03:32:05.751508 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") "
Mar 08 03:32:05.751773 master-0 kubenswrapper[33141]: I0308 03:32:05.751617 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") "
Mar 08 03:32:05.751773 master-0 kubenswrapper[33141]: I0308 03:32:05.751681 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") "
Mar 08 03:32:05.751773 master-0 kubenswrapper[33141]: I0308 03:32:05.751707 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") "
Mar 08 03:32:05.751773 master-0 kubenswrapper[33141]: I0308 03:32:05.751751 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") "
Mar 08 03:32:05.752069 master-0 kubenswrapper[33141]: I0308 03:32:05.752043 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock" (OuterVolumeSpecName: "var-lock") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:32:05.752126 master-0 kubenswrapper[33141]: I0308 03:32:05.752084 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests" (OuterVolumeSpecName: "manifests") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:32:05.752126 master-0 kubenswrapper[33141]: I0308 03:32:05.752106 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log" (OuterVolumeSpecName: "var-log") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:32:05.752416 master-0 kubenswrapper[33141]: I0308 03:32:05.752393 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:32:05.757577 master-0 kubenswrapper[33141]: I0308 03:32:05.757544 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:32:05.854002 master-0 kubenswrapper[33141]: I0308 03:32:05.853833 33141 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") on node \"master-0\" DevicePath \"\""
Mar 08 03:32:05.854002 master-0 kubenswrapper[33141]: I0308 03:32:05.853919 33141 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:32:05.854002 master-0 kubenswrapper[33141]: I0308 03:32:05.853942 33141 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:32:05.854002 master-0 kubenswrapper[33141]: I0308 03:32:05.853959 33141 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 08 03:32:05.854002 master-0 kubenswrapper[33141]: I0308 03:32:05.853977 33141 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") on node \"master-0\" DevicePath \"\""
Mar 08 03:32:06.079808 master-0 kubenswrapper[33141]: I0308 03:32:06.079716 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_899242a15b2bdf3b4a04fb323647ca94/startup-monitor/0.log"
Mar 08 03:32:06.079808 master-0 kubenswrapper[33141]: I0308 03:32:06.079795 33141 generic.go:334] "Generic (PLEG): container finished" podID="899242a15b2bdf3b4a04fb323647ca94" containerID="325c06ebd1141b90c76331b637fb56d0b788e8b9804bc03c4400966f3dd29304" exitCode=137
Mar 08 03:32:06.080685 master-0 kubenswrapper[33141]: I0308 03:32:06.079857 33141 scope.go:117] "RemoveContainer" containerID="325c06ebd1141b90c76331b637fb56d0b788e8b9804bc03c4400966f3dd29304"
Mar 08 03:32:06.080685 master-0 kubenswrapper[33141]: I0308 03:32:06.079942 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:32:06.100702 master-0 kubenswrapper[33141]: I0308 03:32:06.099864 33141 scope.go:117] "RemoveContainer" containerID="325c06ebd1141b90c76331b637fb56d0b788e8b9804bc03c4400966f3dd29304"
Mar 08 03:32:06.100702 master-0 kubenswrapper[33141]: E0308 03:32:06.100598 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"325c06ebd1141b90c76331b637fb56d0b788e8b9804bc03c4400966f3dd29304\": container with ID starting with 325c06ebd1141b90c76331b637fb56d0b788e8b9804bc03c4400966f3dd29304 not found: ID does not exist" containerID="325c06ebd1141b90c76331b637fb56d0b788e8b9804bc03c4400966f3dd29304"
Mar 08 03:32:06.101320 master-0 kubenswrapper[33141]: I0308 03:32:06.100641 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"325c06ebd1141b90c76331b637fb56d0b788e8b9804bc03c4400966f3dd29304"} err="failed to get container status \"325c06ebd1141b90c76331b637fb56d0b788e8b9804bc03c4400966f3dd29304\": rpc error: code = NotFound desc = could not find container \"325c06ebd1141b90c76331b637fb56d0b788e8b9804bc03c4400966f3dd29304\": container with ID starting with 325c06ebd1141b90c76331b637fb56d0b788e8b9804bc03c4400966f3dd29304 not found: ID does not exist"
Mar 08 03:32:06.364312 master-0 kubenswrapper[33141]: I0308 03:32:06.364062 33141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="899242a15b2bdf3b4a04fb323647ca94" path="/var/lib/kubelet/pods/899242a15b2bdf3b4a04fb323647ca94/volumes"
Mar 08 03:32:06.364711 master-0 kubenswrapper[33141]: I0308 03:32:06.364532 33141 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Mar 08 03:32:06.388471 master-0 kubenswrapper[33141]: I0308 03:32:06.388387 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 08 03:32:06.388471 master-0 kubenswrapper[33141]: I0308 03:32:06.388464 33141 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="acd432ef-8c03-4470-8828-1769564d53cc"
Mar 08 03:32:06.392863 master-0 kubenswrapper[33141]: I0308 03:32:06.392794 33141 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 08 03:32:06.393045 master-0 kubenswrapper[33141]: I0308 03:32:06.392860 33141 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="acd432ef-8c03-4470-8828-1769564d53cc"
Mar 08 03:32:26.164739 master-0 kubenswrapper[33141]: I0308 03:32:26.164645 33141 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 08 03:32:26.166031 master-0 kubenswrapper[33141]: I0308 03:32:26.165289 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="6c635212a8e9ee60477413d34dfb3c70" containerName="kube-controller-manager" containerID="cri-o://792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51" gracePeriod=30
Mar 08 03:32:26.166031 master-0 kubenswrapper[33141]: I0308 03:32:26.165337 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="6c635212a8e9ee60477413d34dfb3c70" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843" gracePeriod=30
Mar 08 03:32:26.166031 master-0 kubenswrapper[33141]: I0308 03:32:26.165350 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="6c635212a8e9ee60477413d34dfb3c70" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334" gracePeriod=30
Mar 08 03:32:26.166031 master-0 kubenswrapper[33141]: I0308 03:32:26.165386 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="6c635212a8e9ee60477413d34dfb3c70" containerName="cluster-policy-controller" containerID="cri-o://c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97" gracePeriod=30
Mar 08 03:32:26.169983 master-0 kubenswrapper[33141]: I0308 03:32:26.169834 33141 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 08 03:32:26.170357 master-0 kubenswrapper[33141]: E0308 03:32:26.170314 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c635212a8e9ee60477413d34dfb3c70" containerName="cluster-policy-controller"
Mar 08 03:32:26.170357 master-0 kubenswrapper[33141]: I0308 03:32:26.170349 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c635212a8e9ee60477413d34dfb3c70" containerName="cluster-policy-controller"
Mar 08 03:32:26.170452 master-0 kubenswrapper[33141]: E0308 03:32:26.170381 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor"
Mar 08 03:32:26.170452 master-0 kubenswrapper[33141]: I0308 03:32:26.170394 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor"
Mar 08 03:32:26.170452 master-0 kubenswrapper[33141]: E0308 03:32:26.170422 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c635212a8e9ee60477413d34dfb3c70" containerName="kube-controller-manager"
Mar 08 03:32:26.170452 master-0 kubenswrapper[33141]: I0308 03:32:26.170434 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c635212a8e9ee60477413d34dfb3c70" containerName="kube-controller-manager"
Mar 08 03:32:26.170599 master-0 kubenswrapper[33141]: E0308 03:32:26.170460 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c635212a8e9ee60477413d34dfb3c70" containerName="kube-controller-manager-cert-syncer"
Mar 08 03:32:26.170599 master-0 kubenswrapper[33141]: I0308 03:32:26.170473 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c635212a8e9ee60477413d34dfb3c70" containerName="kube-controller-manager-cert-syncer"
Mar 08 03:32:26.170599 master-0 kubenswrapper[33141]: E0308 03:32:26.170503 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c635212a8e9ee60477413d34dfb3c70" containerName="kube-controller-manager-recovery-controller"
Mar 08 03:32:26.170599 master-0 kubenswrapper[33141]: I0308 03:32:26.170516 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c635212a8e9ee60477413d34dfb3c70" containerName="kube-controller-manager-recovery-controller"
Mar 08 03:32:26.170748 master-0 kubenswrapper[33141]: I0308 03:32:26.170721 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c635212a8e9ee60477413d34dfb3c70" containerName="cluster-policy-controller"
Mar 08 03:32:26.170794 master-0 kubenswrapper[33141]: I0308 03:32:26.170751 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c635212a8e9ee60477413d34dfb3c70" containerName="kube-controller-manager-cert-syncer"
Mar 08 03:32:26.170794 master-0 kubenswrapper[33141]: I0308 03:32:26.170771 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c635212a8e9ee60477413d34dfb3c70" containerName="kube-controller-manager"
Mar 08 03:32:26.170794 master-0 kubenswrapper[33141]: I0308 03:32:26.170790 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor"
Mar 08 03:32:26.170919 master-0 kubenswrapper[33141]: I0308 03:32:26.170819 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c635212a8e9ee60477413d34dfb3c70" containerName="kube-controller-manager-recovery-controller"
Mar 08 03:32:26.357119 master-0 kubenswrapper[33141]: I0308 03:32:26.357070 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d80fb58c61b036bc2179d84399404132-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"d80fb58c61b036bc2179d84399404132\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:32:26.357119 master-0 kubenswrapper[33141]: I0308 03:32:26.357128 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d80fb58c61b036bc2179d84399404132-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"d80fb58c61b036bc2179d84399404132\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:32:26.359416 master-0 kubenswrapper[33141]: I0308 03:32:26.359309 33141 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="6c635212a8e9ee60477413d34dfb3c70" podUID="d80fb58c61b036bc2179d84399404132"
Mar 08 03:32:26.449677 master-0 kubenswrapper[33141]: I0308 03:32:26.449580 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_6c635212a8e9ee60477413d34dfb3c70/kube-controller-manager-cert-syncer/0.log"
Mar 08 03:32:26.450176 master-0 kubenswrapper[33141]: I0308 03:32:26.450151 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:32:26.453796 master-0 kubenswrapper[33141]: I0308 03:32:26.453763 33141 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="6c635212a8e9ee60477413d34dfb3c70" podUID="d80fb58c61b036bc2179d84399404132"
Mar 08 03:32:26.460935 master-0 kubenswrapper[33141]: I0308 03:32:26.460874 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d80fb58c61b036bc2179d84399404132-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"d80fb58c61b036bc2179d84399404132\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:32:26.461061 master-0 kubenswrapper[33141]: I0308 03:32:26.461036 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d80fb58c61b036bc2179d84399404132-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"d80fb58c61b036bc2179d84399404132\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:32:26.462253 master-0 kubenswrapper[33141]: I0308 03:32:26.462214 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d80fb58c61b036bc2179d84399404132-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"d80fb58c61b036bc2179d84399404132\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:32:26.462308 master-0 kubenswrapper[33141]: I0308 03:32:26.462262 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d80fb58c61b036bc2179d84399404132-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"d80fb58c61b036bc2179d84399404132\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:32:26.562049 master-0 kubenswrapper[33141]: I0308 03:32:26.561975 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6c635212a8e9ee60477413d34dfb3c70-cert-dir\") pod \"6c635212a8e9ee60477413d34dfb3c70\" (UID: \"6c635212a8e9ee60477413d34dfb3c70\") "
Mar 08 03:32:26.562256 master-0 kubenswrapper[33141]: I0308 03:32:26.562105 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6c635212a8e9ee60477413d34dfb3c70-resource-dir\") pod \"6c635212a8e9ee60477413d34dfb3c70\" (UID: \"6c635212a8e9ee60477413d34dfb3c70\") "
Mar 08 03:32:26.562292 master-0 kubenswrapper[33141]: I0308 03:32:26.562134 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c635212a8e9ee60477413d34dfb3c70-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "6c635212a8e9ee60477413d34dfb3c70" (UID: "6c635212a8e9ee60477413d34dfb3c70"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:32:26.562386 master-0 kubenswrapper[33141]: I0308 03:32:26.562349 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c635212a8e9ee60477413d34dfb3c70-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "6c635212a8e9ee60477413d34dfb3c70" (UID: "6c635212a8e9ee60477413d34dfb3c70"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:32:26.562482 master-0 kubenswrapper[33141]: I0308 03:32:26.562457 33141 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/6c635212a8e9ee60477413d34dfb3c70-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:32:26.562482 master-0 kubenswrapper[33141]: I0308 03:32:26.562477 33141 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6c635212a8e9ee60477413d34dfb3c70-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:32:27.247360 master-0 kubenswrapper[33141]: I0308 03:32:27.247253 33141 generic.go:334] "Generic (PLEG): container finished" podID="2129802f-8b19-4eee-8ac3-1cb980b067b7" containerID="bdfa69d061b532aa4500a61c6d722eb62da8a58dc2b287915aaa581ce754b8ae" exitCode=0
Mar 08 03:32:27.247360 master-0 kubenswrapper[33141]: I0308 03:32:27.247321 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"2129802f-8b19-4eee-8ac3-1cb980b067b7","Type":"ContainerDied","Data":"bdfa69d061b532aa4500a61c6d722eb62da8a58dc2b287915aaa581ce754b8ae"}
Mar 08 03:32:27.250591 master-0 kubenswrapper[33141]: I0308 03:32:27.250534 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_6c635212a8e9ee60477413d34dfb3c70/kube-controller-manager-cert-syncer/0.log"
Mar 08 03:32:27.251961 master-0 kubenswrapper[33141]: I0308 03:32:27.251876 33141 generic.go:334] "Generic (PLEG): container finished" podID="6c635212a8e9ee60477413d34dfb3c70" containerID="689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334" exitCode=0
Mar 08 03:32:27.251961 master-0 kubenswrapper[33141]: I0308 03:32:27.251953 33141 generic.go:334] "Generic (PLEG): container finished" podID="6c635212a8e9ee60477413d34dfb3c70" containerID="2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843" exitCode=2
Mar 08 03:32:27.252130 master-0 kubenswrapper[33141]: I0308 03:32:27.251974 33141 generic.go:334] "Generic (PLEG): container finished" podID="6c635212a8e9ee60477413d34dfb3c70" containerID="c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97" exitCode=0
Mar 08 03:32:27.252130 master-0 kubenswrapper[33141]: I0308 03:32:27.251990 33141 generic.go:334] "Generic (PLEG): container finished" podID="6c635212a8e9ee60477413d34dfb3c70" containerID="792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51" exitCode=0
Mar 08 03:32:27.252130 master-0 kubenswrapper[33141]: I0308 03:32:27.251988 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:32:27.252349 master-0 kubenswrapper[33141]: I0308 03:32:27.252159 33141 scope.go:117] "RemoveContainer" containerID="689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334"
Mar 08 03:32:27.284102 master-0 kubenswrapper[33141]: I0308 03:32:27.278272 33141 scope.go:117] "RemoveContainer" containerID="2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843"
Mar 08 03:32:27.288461 master-0 kubenswrapper[33141]: I0308 03:32:27.288387 33141 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="6c635212a8e9ee60477413d34dfb3c70" podUID="d80fb58c61b036bc2179d84399404132"
Mar 08 03:32:27.299728 master-0 kubenswrapper[33141]: I0308 03:32:27.299645 33141 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="6c635212a8e9ee60477413d34dfb3c70" podUID="d80fb58c61b036bc2179d84399404132"
Mar 08 03:32:27.308132 master-0 kubenswrapper[33141]: I0308 03:32:27.308063 33141 scope.go:117] "RemoveContainer" containerID="c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97"
Mar 08 03:32:27.327561 master-0 kubenswrapper[33141]: I0308 03:32:27.327486 33141 scope.go:117] "RemoveContainer" containerID="792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51"
Mar 08 03:32:27.346313 master-0 kubenswrapper[33141]: I0308 03:32:27.346271 33141 scope.go:117] "RemoveContainer" containerID="689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334"
Mar 08 03:32:27.346796 master-0 kubenswrapper[33141]: E0308 03:32:27.346753 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334\": container with ID starting with 689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334 not found: ID does not exist" containerID="689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334"
Mar 08 03:32:27.346868 master-0 kubenswrapper[33141]: I0308 03:32:27.346789 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334"} err="failed to get container status \"689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334\": rpc error: code = NotFound desc = could not find container \"689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334\": container with ID starting with 689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334 not found: ID does not exist"
Mar 08 03:32:27.346868 master-0 kubenswrapper[33141]: I0308 03:32:27.346812 33141 scope.go:117] "RemoveContainer" containerID="2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843"
Mar 08 03:32:27.347253 master-0 kubenswrapper[33141]: E0308 03:32:27.347209 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843\": container with ID starting with 2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843 not found: ID does not exist" containerID="2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843"
Mar 08 03:32:27.347253 master-0 kubenswrapper[33141]: I0308 03:32:27.347244 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843"} err="failed to get container status \"2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843\": rpc error: code = NotFound desc = could not find container \"2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843\": container with ID starting with 2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843 not found: ID does not exist"
Mar 08 03:32:27.347395 master-0 kubenswrapper[33141]: I0308 03:32:27.347263 33141 scope.go:117] "RemoveContainer" containerID="c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97"
Mar 08 03:32:27.347657 master-0 kubenswrapper[33141]: E0308 03:32:27.347618 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97\": container with ID starting with c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97 not found: ID does not exist" containerID="c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97"
Mar 08 03:32:27.347657 master-0 kubenswrapper[33141]: I0308 03:32:27.347647 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97"} err="failed to get container status \"c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97\": rpc error: code = NotFound desc = could not find container \"c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97\": container with ID starting with c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97 not found: ID does not exist"
Mar 08 03:32:27.347788 master-0 kubenswrapper[33141]: I0308 03:32:27.347664 33141 scope.go:117] "RemoveContainer" containerID="792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51"
Mar 08 03:32:27.347985 master-0 kubenswrapper[33141]: E0308 03:32:27.347942 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51\": container with ID starting with 792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51 not found: ID does not exist" containerID="792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51"
Mar 08 03:32:27.348309 master-0 kubenswrapper[33141]: I0308 03:32:27.348011 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51"} err="failed to get container status \"792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51\": rpc error: code = NotFound desc = could not find container \"792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51\": container with ID starting with 792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51 not found: ID does not exist"
Mar 08 03:32:27.348309 master-0 kubenswrapper[33141]: I0308 03:32:27.348300 33141 scope.go:117] "RemoveContainer" containerID="689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334"
Mar 08 03:32:27.348780 master-0 kubenswrapper[33141]: I0308 03:32:27.348688 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334"} err="failed to get container status \"689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334\": rpc error: code = NotFound desc = could not find container \"689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334\": container with ID starting with 689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334 not found: ID does not exist"
Mar 08 03:32:27.348780 master-0 kubenswrapper[33141]: I0308 03:32:27.348735 33141 scope.go:117] "RemoveContainer" containerID="2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843"
Mar 08 03:32:27.349151 master-0 kubenswrapper[33141]: I0308 03:32:27.349096 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843"} err="failed to get container status \"2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843\": rpc error: code = NotFound desc = could not find container \"2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843\": container with ID starting with 2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843 not found: ID does not exist"
Mar 08 03:32:27.349151 master-0 kubenswrapper[33141]: I0308 03:32:27.349128 33141 scope.go:117] "RemoveContainer" containerID="c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97"
Mar 08 03:32:27.349634 master-0 kubenswrapper[33141]: I0308 03:32:27.349580 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97"} err="failed to get container status \"c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97\": rpc error: code = NotFound desc = could not find container \"c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97\": container with ID starting with c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97 not found: ID does not exist"
Mar 08 03:32:27.349634 master-0 kubenswrapper[33141]: I0308 03:32:27.349607 33141 scope.go:117] "RemoveContainer" containerID="792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51"
Mar 08 03:32:27.350738 master-0 kubenswrapper[33141]: I0308 03:32:27.350673 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51"} err="failed to get container status \"792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51\": rpc error: code = NotFound desc = could not find container \"792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51\": container with ID starting with 792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51 not found: ID does not exist"
Mar 08 03:32:27.350738 master-0 kubenswrapper[33141]: I0308 03:32:27.350732 33141 scope.go:117] "RemoveContainer" containerID="689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334"
Mar 08 03:32:27.351240 master-0 kubenswrapper[33141]: I0308 03:32:27.351191 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334"} err="failed to get container status \"689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334\": rpc error: code = NotFound desc = could not find container \"689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334\": container with ID starting with 689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334 not found: ID does not exist"
Mar 08 03:32:27.351378 master-0 kubenswrapper[33141]: I0308 03:32:27.351256 33141 scope.go:117] "RemoveContainer" containerID="2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843"
Mar 08 03:32:27.351637 master-0 kubenswrapper[33141]: I0308 03:32:27.351588 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843"} err="failed to get 
container status \"2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843\": rpc error: code = NotFound desc = could not find container \"2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843\": container with ID starting with 2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843 not found: ID does not exist" Mar 08 03:32:27.351637 master-0 kubenswrapper[33141]: I0308 03:32:27.351622 33141 scope.go:117] "RemoveContainer" containerID="c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97" Mar 08 03:32:27.351989 master-0 kubenswrapper[33141]: I0308 03:32:27.351850 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97"} err="failed to get container status \"c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97\": rpc error: code = NotFound desc = could not find container \"c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97\": container with ID starting with c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97 not found: ID does not exist" Mar 08 03:32:27.351989 master-0 kubenswrapper[33141]: I0308 03:32:27.351954 33141 scope.go:117] "RemoveContainer" containerID="792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51" Mar 08 03:32:27.352359 master-0 kubenswrapper[33141]: I0308 03:32:27.352287 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51"} err="failed to get container status \"792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51\": rpc error: code = NotFound desc = could not find container \"792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51\": container with ID starting with 792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51 not found: ID does not exist" Mar 08 03:32:27.352359 master-0 kubenswrapper[33141]: 
I0308 03:32:27.352351 33141 scope.go:117] "RemoveContainer" containerID="689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334" Mar 08 03:32:27.352670 master-0 kubenswrapper[33141]: I0308 03:32:27.352626 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334"} err="failed to get container status \"689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334\": rpc error: code = NotFound desc = could not find container \"689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334\": container with ID starting with 689547d98c54b4aa4bce9c7135ceaf55006bc8885cd685791227bba972fc6334 not found: ID does not exist" Mar 08 03:32:27.352670 master-0 kubenswrapper[33141]: I0308 03:32:27.352657 33141 scope.go:117] "RemoveContainer" containerID="2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843" Mar 08 03:32:27.352992 master-0 kubenswrapper[33141]: I0308 03:32:27.352963 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843"} err="failed to get container status \"2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843\": rpc error: code = NotFound desc = could not find container \"2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843\": container with ID starting with 2e1f4b7b4c8ff4ce27e879bd84a79128252e775b9923da9a7e6b3fe8fe642843 not found: ID does not exist" Mar 08 03:32:27.353079 master-0 kubenswrapper[33141]: I0308 03:32:27.353010 33141 scope.go:117] "RemoveContainer" containerID="c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97" Mar 08 03:32:27.353357 master-0 kubenswrapper[33141]: I0308 03:32:27.353316 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97"} err="failed to 
get container status \"c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97\": rpc error: code = NotFound desc = could not find container \"c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97\": container with ID starting with c5e779e0652480c855fc2c6f697215f8aca375c6e18fc8f91a02dce1d99d2c97 not found: ID does not exist" Mar 08 03:32:27.353434 master-0 kubenswrapper[33141]: I0308 03:32:27.353363 33141 scope.go:117] "RemoveContainer" containerID="792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51" Mar 08 03:32:27.353966 master-0 kubenswrapper[33141]: I0308 03:32:27.353881 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51"} err="failed to get container status \"792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51\": rpc error: code = NotFound desc = could not find container \"792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51\": container with ID starting with 792296a76e93c386f2e8b4b7bfa2f9386b19f4df38c3cb9a098ff1c153fa9c51 not found: ID does not exist" Mar 08 03:32:27.400949 master-0 kubenswrapper[33141]: I0308 03:32:27.392545 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 08 03:32:27.400949 master-0 kubenswrapper[33141]: I0308 03:32:27.392730 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-4-master-0" podUID="f17bdb20-5114-45c4-a27b-1260baba6bfa" containerName="installer" containerID="cri-o://35dd849142695f1b0a12b95d1af188927f89e7be526df4fb812e8984733429ad" gracePeriod=30 Mar 08 03:32:27.954500 master-0 kubenswrapper[33141]: I0308 03:32:27.954032 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_f17bdb20-5114-45c4-a27b-1260baba6bfa/installer/0.log" Mar 08 03:32:27.954500 master-0 
kubenswrapper[33141]: I0308 03:32:27.954121 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 08 03:32:28.099510 master-0 kubenswrapper[33141]: I0308 03:32:28.099455 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f17bdb20-5114-45c4-a27b-1260baba6bfa-var-lock\") pod \"f17bdb20-5114-45c4-a27b-1260baba6bfa\" (UID: \"f17bdb20-5114-45c4-a27b-1260baba6bfa\") " Mar 08 03:32:28.099684 master-0 kubenswrapper[33141]: I0308 03:32:28.099610 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f17bdb20-5114-45c4-a27b-1260baba6bfa-kube-api-access\") pod \"f17bdb20-5114-45c4-a27b-1260baba6bfa\" (UID: \"f17bdb20-5114-45c4-a27b-1260baba6bfa\") " Mar 08 03:32:28.099684 master-0 kubenswrapper[33141]: I0308 03:32:28.099624 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f17bdb20-5114-45c4-a27b-1260baba6bfa-var-lock" (OuterVolumeSpecName: "var-lock") pod "f17bdb20-5114-45c4-a27b-1260baba6bfa" (UID: "f17bdb20-5114-45c4-a27b-1260baba6bfa"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:32:28.099684 master-0 kubenswrapper[33141]: I0308 03:32:28.099658 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f17bdb20-5114-45c4-a27b-1260baba6bfa-kubelet-dir\") pod \"f17bdb20-5114-45c4-a27b-1260baba6bfa\" (UID: \"f17bdb20-5114-45c4-a27b-1260baba6bfa\") " Mar 08 03:32:28.099837 master-0 kubenswrapper[33141]: I0308 03:32:28.099765 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f17bdb20-5114-45c4-a27b-1260baba6bfa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f17bdb20-5114-45c4-a27b-1260baba6bfa" (UID: "f17bdb20-5114-45c4-a27b-1260baba6bfa"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:32:28.100101 master-0 kubenswrapper[33141]: I0308 03:32:28.100071 33141 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f17bdb20-5114-45c4-a27b-1260baba6bfa-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 03:32:28.100162 master-0 kubenswrapper[33141]: I0308 03:32:28.100102 33141 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f17bdb20-5114-45c4-a27b-1260baba6bfa-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 03:32:28.103139 master-0 kubenswrapper[33141]: I0308 03:32:28.103106 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f17bdb20-5114-45c4-a27b-1260baba6bfa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f17bdb20-5114-45c4-a27b-1260baba6bfa" (UID: "f17bdb20-5114-45c4-a27b-1260baba6bfa"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:32:28.202019 master-0 kubenswrapper[33141]: I0308 03:32:28.201884 33141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f17bdb20-5114-45c4-a27b-1260baba6bfa-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 03:32:28.265326 master-0 kubenswrapper[33141]: I0308 03:32:28.265228 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_f17bdb20-5114-45c4-a27b-1260baba6bfa/installer/0.log" Mar 08 03:32:28.265326 master-0 kubenswrapper[33141]: I0308 03:32:28.265316 33141 generic.go:334] "Generic (PLEG): container finished" podID="f17bdb20-5114-45c4-a27b-1260baba6bfa" containerID="35dd849142695f1b0a12b95d1af188927f89e7be526df4fb812e8984733429ad" exitCode=1 Mar 08 03:32:28.266451 master-0 kubenswrapper[33141]: I0308 03:32:28.265416 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"f17bdb20-5114-45c4-a27b-1260baba6bfa","Type":"ContainerDied","Data":"35dd849142695f1b0a12b95d1af188927f89e7be526df4fb812e8984733429ad"} Mar 08 03:32:28.266451 master-0 kubenswrapper[33141]: I0308 03:32:28.265455 33141 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 08 03:32:28.266451 master-0 kubenswrapper[33141]: I0308 03:32:28.265461 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"f17bdb20-5114-45c4-a27b-1260baba6bfa","Type":"ContainerDied","Data":"f29a122d553a69e7964cbce6151ebb321d65108a71bb80806659dcb23be6c21b"} Mar 08 03:32:28.266451 master-0 kubenswrapper[33141]: I0308 03:32:28.265478 33141 scope.go:117] "RemoveContainer" containerID="35dd849142695f1b0a12b95d1af188927f89e7be526df4fb812e8984733429ad" Mar 08 03:32:28.293247 master-0 kubenswrapper[33141]: I0308 03:32:28.293144 33141 scope.go:117] "RemoveContainer" containerID="35dd849142695f1b0a12b95d1af188927f89e7be526df4fb812e8984733429ad" Mar 08 03:32:28.299023 master-0 kubenswrapper[33141]: E0308 03:32:28.298607 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35dd849142695f1b0a12b95d1af188927f89e7be526df4fb812e8984733429ad\": container with ID starting with 35dd849142695f1b0a12b95d1af188927f89e7be526df4fb812e8984733429ad not found: ID does not exist" containerID="35dd849142695f1b0a12b95d1af188927f89e7be526df4fb812e8984733429ad" Mar 08 03:32:28.299023 master-0 kubenswrapper[33141]: I0308 03:32:28.298681 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35dd849142695f1b0a12b95d1af188927f89e7be526df4fb812e8984733429ad"} err="failed to get container status \"35dd849142695f1b0a12b95d1af188927f89e7be526df4fb812e8984733429ad\": rpc error: code = NotFound desc = could not find container \"35dd849142695f1b0a12b95d1af188927f89e7be526df4fb812e8984733429ad\": container with ID starting with 35dd849142695f1b0a12b95d1af188927f89e7be526df4fb812e8984733429ad not found: ID does not exist" Mar 08 03:32:28.318485 master-0 kubenswrapper[33141]: I0308 03:32:28.318418 33141 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 08 03:32:28.327345 master-0 kubenswrapper[33141]: I0308 03:32:28.327284 33141 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 08 03:32:28.359262 master-0 kubenswrapper[33141]: I0308 03:32:28.359186 33141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c635212a8e9ee60477413d34dfb3c70" path="/var/lib/kubelet/pods/6c635212a8e9ee60477413d34dfb3c70/volumes" Mar 08 03:32:28.361123 master-0 kubenswrapper[33141]: I0308 03:32:28.361084 33141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f17bdb20-5114-45c4-a27b-1260baba6bfa" path="/var/lib/kubelet/pods/f17bdb20-5114-45c4-a27b-1260baba6bfa/volumes" Mar 08 03:32:28.753286 master-0 kubenswrapper[33141]: I0308 03:32:28.753243 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 03:32:28.911533 master-0 kubenswrapper[33141]: I0308 03:32:28.911474 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2129802f-8b19-4eee-8ac3-1cb980b067b7-var-lock\") pod \"2129802f-8b19-4eee-8ac3-1cb980b067b7\" (UID: \"2129802f-8b19-4eee-8ac3-1cb980b067b7\") " Mar 08 03:32:28.911717 master-0 kubenswrapper[33141]: I0308 03:32:28.911584 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2129802f-8b19-4eee-8ac3-1cb980b067b7-kube-api-access\") pod \"2129802f-8b19-4eee-8ac3-1cb980b067b7\" (UID: \"2129802f-8b19-4eee-8ac3-1cb980b067b7\") " Mar 08 03:32:28.911717 master-0 kubenswrapper[33141]: I0308 03:32:28.911611 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2129802f-8b19-4eee-8ac3-1cb980b067b7-var-lock" (OuterVolumeSpecName: "var-lock") pod 
"2129802f-8b19-4eee-8ac3-1cb980b067b7" (UID: "2129802f-8b19-4eee-8ac3-1cb980b067b7"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:32:28.911787 master-0 kubenswrapper[33141]: I0308 03:32:28.911719 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2129802f-8b19-4eee-8ac3-1cb980b067b7-kubelet-dir\") pod \"2129802f-8b19-4eee-8ac3-1cb980b067b7\" (UID: \"2129802f-8b19-4eee-8ac3-1cb980b067b7\") " Mar 08 03:32:28.911873 master-0 kubenswrapper[33141]: I0308 03:32:28.911846 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2129802f-8b19-4eee-8ac3-1cb980b067b7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2129802f-8b19-4eee-8ac3-1cb980b067b7" (UID: "2129802f-8b19-4eee-8ac3-1cb980b067b7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:32:28.912193 master-0 kubenswrapper[33141]: I0308 03:32:28.912162 33141 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2129802f-8b19-4eee-8ac3-1cb980b067b7-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 03:32:28.912241 master-0 kubenswrapper[33141]: I0308 03:32:28.912197 33141 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2129802f-8b19-4eee-8ac3-1cb980b067b7-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 03:32:28.914505 master-0 kubenswrapper[33141]: I0308 03:32:28.914437 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2129802f-8b19-4eee-8ac3-1cb980b067b7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2129802f-8b19-4eee-8ac3-1cb980b067b7" (UID: "2129802f-8b19-4eee-8ac3-1cb980b067b7"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:32:29.013832 master-0 kubenswrapper[33141]: I0308 03:32:29.013756 33141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2129802f-8b19-4eee-8ac3-1cb980b067b7-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 03:32:29.278277 master-0 kubenswrapper[33141]: I0308 03:32:29.278090 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"2129802f-8b19-4eee-8ac3-1cb980b067b7","Type":"ContainerDied","Data":"a07a1ce3b7b21b02788752f5d94b739f3f01217959cc4e943a9ae32b5bafafbe"} Mar 08 03:32:29.278277 master-0 kubenswrapper[33141]: I0308 03:32:29.278136 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 08 03:32:29.278277 master-0 kubenswrapper[33141]: I0308 03:32:29.278171 33141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a07a1ce3b7b21b02788752f5d94b739f3f01217959cc4e943a9ae32b5bafafbe" Mar 08 03:32:31.389042 master-0 kubenswrapper[33141]: I0308 03:32:31.388946 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 08 03:32:31.390089 master-0 kubenswrapper[33141]: E0308 03:32:31.389167 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17bdb20-5114-45c4-a27b-1260baba6bfa" containerName="installer" Mar 08 03:32:31.390089 master-0 kubenswrapper[33141]: I0308 03:32:31.389180 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17bdb20-5114-45c4-a27b-1260baba6bfa" containerName="installer" Mar 08 03:32:31.390089 master-0 kubenswrapper[33141]: E0308 03:32:31.389219 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2129802f-8b19-4eee-8ac3-1cb980b067b7" containerName="installer" Mar 08 03:32:31.390089 master-0 
kubenswrapper[33141]: I0308 03:32:31.389227 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="2129802f-8b19-4eee-8ac3-1cb980b067b7" containerName="installer" Mar 08 03:32:31.390089 master-0 kubenswrapper[33141]: I0308 03:32:31.389312 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="f17bdb20-5114-45c4-a27b-1260baba6bfa" containerName="installer" Mar 08 03:32:31.390089 master-0 kubenswrapper[33141]: I0308 03:32:31.389347 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="2129802f-8b19-4eee-8ac3-1cb980b067b7" containerName="installer" Mar 08 03:32:31.390089 master-0 kubenswrapper[33141]: I0308 03:32:31.389693 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 08 03:32:31.392537 master-0 kubenswrapper[33141]: I0308 03:32:31.392479 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 08 03:32:31.392721 master-0 kubenswrapper[33141]: I0308 03:32:31.392588 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-sglg6" Mar 08 03:32:31.405836 master-0 kubenswrapper[33141]: I0308 03:32:31.405535 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 08 03:32:31.448850 master-0 kubenswrapper[33141]: I0308 03:32:31.448768 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f958554-d0e0-4a2d-84e8-17e20ae7625c-kube-api-access\") pod \"installer-5-master-0\" (UID: \"0f958554-d0e0-4a2d-84e8-17e20ae7625c\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 08 03:32:31.449108 master-0 kubenswrapper[33141]: I0308 03:32:31.448872 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/0f958554-d0e0-4a2d-84e8-17e20ae7625c-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"0f958554-d0e0-4a2d-84e8-17e20ae7625c\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 08 03:32:31.449108 master-0 kubenswrapper[33141]: I0308 03:32:31.449020 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0f958554-d0e0-4a2d-84e8-17e20ae7625c-var-lock\") pod \"installer-5-master-0\" (UID: \"0f958554-d0e0-4a2d-84e8-17e20ae7625c\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 08 03:32:31.550632 master-0 kubenswrapper[33141]: I0308 03:32:31.550518 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f958554-d0e0-4a2d-84e8-17e20ae7625c-kube-api-access\") pod \"installer-5-master-0\" (UID: \"0f958554-d0e0-4a2d-84e8-17e20ae7625c\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 08 03:32:31.550898 master-0 kubenswrapper[33141]: I0308 03:32:31.550650 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f958554-d0e0-4a2d-84e8-17e20ae7625c-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"0f958554-d0e0-4a2d-84e8-17e20ae7625c\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 08 03:32:31.550898 master-0 kubenswrapper[33141]: I0308 03:32:31.550712 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0f958554-d0e0-4a2d-84e8-17e20ae7625c-var-lock\") pod \"installer-5-master-0\" (UID: \"0f958554-d0e0-4a2d-84e8-17e20ae7625c\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 08 03:32:31.551092 master-0 kubenswrapper[33141]: I0308 03:32:31.550930 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/0f958554-d0e0-4a2d-84e8-17e20ae7625c-var-lock\") pod \"installer-5-master-0\" (UID: \"0f958554-d0e0-4a2d-84e8-17e20ae7625c\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 08 03:32:31.551092 master-0 kubenswrapper[33141]: I0308 03:32:31.551003 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f958554-d0e0-4a2d-84e8-17e20ae7625c-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"0f958554-d0e0-4a2d-84e8-17e20ae7625c\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 08 03:32:31.582088 master-0 kubenswrapper[33141]: I0308 03:32:31.581003 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f958554-d0e0-4a2d-84e8-17e20ae7625c-kube-api-access\") pod \"installer-5-master-0\" (UID: \"0f958554-d0e0-4a2d-84e8-17e20ae7625c\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 08 03:32:31.734615 master-0 kubenswrapper[33141]: I0308 03:32:31.734488 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 08 03:32:32.243367 master-0 kubenswrapper[33141]: I0308 03:32:32.243321 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 08 03:32:32.303017 master-0 kubenswrapper[33141]: I0308 03:32:32.302857 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"0f958554-d0e0-4a2d-84e8-17e20ae7625c","Type":"ContainerStarted","Data":"96ae8ea1742c004dc67f72a928be3799103a0e75de703bed9bb0e13766811751"} Mar 08 03:32:33.316309 master-0 kubenswrapper[33141]: I0308 03:32:33.316233 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"0f958554-d0e0-4a2d-84e8-17e20ae7625c","Type":"ContainerStarted","Data":"2b34b277e4a2839792fd0e13357068de7901d4501e66a718f17861b80f532b3f"} Mar 08 03:32:33.342620 master-0 kubenswrapper[33141]: I0308 03:32:33.342494 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=2.34246487 podStartE2EDuration="2.34246487s" podCreationTimestamp="2026-03-08 03:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:32:33.337736356 +0000 UTC m=+67.207629609" watchObservedRunningTime="2026-03-08 03:32:33.34246487 +0000 UTC m=+67.212358103" Mar 08 03:32:37.349705 master-0 kubenswrapper[33141]: I0308 03:32:37.349497 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:32:37.393010 master-0 kubenswrapper[33141]: I0308 03:32:37.392881 33141 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="2f7192aa-8529-4fc5-a318-8d1135ab808d" Mar 08 03:32:37.393010 master-0 kubenswrapper[33141]: I0308 03:32:37.392989 33141 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="2f7192aa-8529-4fc5-a318-8d1135ab808d" Mar 08 03:32:37.409172 master-0 kubenswrapper[33141]: I0308 03:32:37.409092 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 08 03:32:37.414724 master-0 kubenswrapper[33141]: I0308 03:32:37.414668 33141 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:32:37.419278 master-0 kubenswrapper[33141]: I0308 03:32:37.419209 33141 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 08 03:32:37.429361 master-0 kubenswrapper[33141]: I0308 03:32:37.429173 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:32:37.432609 master-0 kubenswrapper[33141]: I0308 03:32:37.432558 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 08 03:32:38.368091 master-0 kubenswrapper[33141]: I0308 03:32:38.367747 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"d80fb58c61b036bc2179d84399404132","Type":"ContainerStarted","Data":"b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5"} Mar 08 03:32:38.368091 master-0 kubenswrapper[33141]: I0308 03:32:38.367796 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"d80fb58c61b036bc2179d84399404132","Type":"ContainerStarted","Data":"5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70"} Mar 08 03:32:38.368091 master-0 kubenswrapper[33141]: I0308 03:32:38.367806 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"d80fb58c61b036bc2179d84399404132","Type":"ContainerStarted","Data":"efbf585c23fc1e979a8521b267e8220f735c3268158b1f137e28d2cce1acecfb"} Mar 08 03:32:38.368091 master-0 kubenswrapper[33141]: I0308 03:32:38.367815 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"d80fb58c61b036bc2179d84399404132","Type":"ContainerStarted","Data":"8eeec666864e748a1fbee243429b1dfe74356712cfaeebca37a10f9dd544cdf1"} Mar 08 03:32:39.373393 master-0 kubenswrapper[33141]: I0308 03:32:39.373305 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"d80fb58c61b036bc2179d84399404132","Type":"ContainerStarted","Data":"8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07"} Mar 08 03:32:39.399891 master-0 kubenswrapper[33141]: I0308 03:32:39.399778 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.399755563 podStartE2EDuration="2.399755563s" podCreationTimestamp="2026-03-08 03:32:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:32:39.395363978 +0000 UTC m=+73.265257211" watchObservedRunningTime="2026-03-08 03:32:39.399755563 +0000 UTC m=+73.269648766" Mar 08 03:32:47.430568 master-0 kubenswrapper[33141]: I0308 03:32:47.430417 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:32:47.431599 master-0 kubenswrapper[33141]: I0308 03:32:47.430584 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:32:47.431599 master-0 kubenswrapper[33141]: I0308 03:32:47.430618 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:32:47.431599 master-0 kubenswrapper[33141]: I0308 03:32:47.430644 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:32:47.437859 master-0 kubenswrapper[33141]: I0308 03:32:47.437814 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:32:47.438256 master-0 kubenswrapper[33141]: I0308 03:32:47.438177 33141 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:32:47.460251 master-0 kubenswrapper[33141]: I0308 03:32:47.460172 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:32:48.469170 master-0 kubenswrapper[33141]: I0308 03:32:48.469061 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:32:56.418633 master-0 kubenswrapper[33141]: I0308 03:32:56.418554 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh"] Mar 08 03:32:56.419384 master-0 kubenswrapper[33141]: I0308 03:32:56.418759 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" podUID="a0ee8c53-bf36-4459-a2c2-380293a09e26" containerName="route-controller-manager" containerID="cri-o://510bc972f02c805726ce0e8b26c9f46e3ffb7b53590b52c60f2d8c1b5c1b2518" gracePeriod=30 Mar 08 03:32:56.421995 master-0 kubenswrapper[33141]: I0308 03:32:56.421947 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75cd54f7f-2bg6l"] Mar 08 03:32:56.422176 master-0 kubenswrapper[33141]: I0308 03:32:56.422139 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" podUID="bd53c98b-51cc-498a-ab37-f743a27bdcfb" containerName="controller-manager" containerID="cri-o://0c081c5b9012641af62a91061e5a811ea320420fd3f4b7f190d5a657a4671603" gracePeriod=30 Mar 08 03:32:56.545032 master-0 kubenswrapper[33141]: I0308 03:32:56.544992 33141 generic.go:334] "Generic (PLEG): container finished" podID="a0ee8c53-bf36-4459-a2c2-380293a09e26" 
containerID="510bc972f02c805726ce0e8b26c9f46e3ffb7b53590b52c60f2d8c1b5c1b2518" exitCode=0 Mar 08 03:32:56.545132 master-0 kubenswrapper[33141]: I0308 03:32:56.545042 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" event={"ID":"a0ee8c53-bf36-4459-a2c2-380293a09e26","Type":"ContainerDied","Data":"510bc972f02c805726ce0e8b26c9f46e3ffb7b53590b52c60f2d8c1b5c1b2518"} Mar 08 03:32:56.545132 master-0 kubenswrapper[33141]: I0308 03:32:56.545082 33141 scope.go:117] "RemoveContainer" containerID="a37cd76e25a0f8104dadf4dc40b6fbbd6e89423031b1f10fd470d329da3c1ab7" Mar 08 03:32:56.943280 master-0 kubenswrapper[33141]: I0308 03:32:56.943238 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:32:56.948059 master-0 kubenswrapper[33141]: I0308 03:32:56.948025 33141 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:32:57.136397 master-0 kubenswrapper[33141]: I0308 03:32:57.136323 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-config\") pod \"a0ee8c53-bf36-4459-a2c2-380293a09e26\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " Mar 08 03:32:57.136665 master-0 kubenswrapper[33141]: I0308 03:32:57.136456 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-client-ca\") pod \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " Mar 08 03:32:57.136665 master-0 kubenswrapper[33141]: I0308 03:32:57.136490 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd53c98b-51cc-498a-ab37-f743a27bdcfb-serving-cert\") pod \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " Mar 08 03:32:57.136665 master-0 kubenswrapper[33141]: I0308 03:32:57.136533 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-config\") pod \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " Mar 08 03:32:57.136665 master-0 kubenswrapper[33141]: I0308 03:32:57.136578 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ee8c53-bf36-4459-a2c2-380293a09e26-serving-cert\") pod \"a0ee8c53-bf36-4459-a2c2-380293a09e26\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " Mar 08 03:32:57.136665 master-0 kubenswrapper[33141]: I0308 03:32:57.136615 33141 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-proxy-ca-bundles\") pod \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " Mar 08 03:32:57.136665 master-0 kubenswrapper[33141]: I0308 03:32:57.136657 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8krg\" (UniqueName: \"kubernetes.io/projected/a0ee8c53-bf36-4459-a2c2-380293a09e26-kube-api-access-c8krg\") pod \"a0ee8c53-bf36-4459-a2c2-380293a09e26\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " Mar 08 03:32:57.136998 master-0 kubenswrapper[33141]: I0308 03:32:57.136700 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-client-ca\") pod \"a0ee8c53-bf36-4459-a2c2-380293a09e26\" (UID: \"a0ee8c53-bf36-4459-a2c2-380293a09e26\") " Mar 08 03:32:57.136998 master-0 kubenswrapper[33141]: I0308 03:32:57.136739 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hz7l8\" (UniqueName: \"kubernetes.io/projected/bd53c98b-51cc-498a-ab37-f743a27bdcfb-kube-api-access-hz7l8\") pod \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\" (UID: \"bd53c98b-51cc-498a-ab37-f743a27bdcfb\") " Mar 08 03:32:57.138047 master-0 kubenswrapper[33141]: I0308 03:32:57.137729 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bd53c98b-51cc-498a-ab37-f743a27bdcfb" (UID: "bd53c98b-51cc-498a-ab37-f743a27bdcfb"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:32:57.138047 master-0 kubenswrapper[33141]: I0308 03:32:57.137897 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-config" (OuterVolumeSpecName: "config") pod "a0ee8c53-bf36-4459-a2c2-380293a09e26" (UID: "a0ee8c53-bf36-4459-a2c2-380293a09e26"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:32:57.138047 master-0 kubenswrapper[33141]: I0308 03:32:57.137978 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-client-ca" (OuterVolumeSpecName: "client-ca") pod "a0ee8c53-bf36-4459-a2c2-380293a09e26" (UID: "a0ee8c53-bf36-4459-a2c2-380293a09e26"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:32:57.138047 master-0 kubenswrapper[33141]: I0308 03:32:57.137940 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-client-ca" (OuterVolumeSpecName: "client-ca") pod "bd53c98b-51cc-498a-ab37-f743a27bdcfb" (UID: "bd53c98b-51cc-498a-ab37-f743a27bdcfb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:32:57.138697 master-0 kubenswrapper[33141]: I0308 03:32:57.138605 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-config" (OuterVolumeSpecName: "config") pod "bd53c98b-51cc-498a-ab37-f743a27bdcfb" (UID: "bd53c98b-51cc-498a-ab37-f743a27bdcfb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:32:57.139888 master-0 kubenswrapper[33141]: I0308 03:32:57.139841 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd53c98b-51cc-498a-ab37-f743a27bdcfb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bd53c98b-51cc-498a-ab37-f743a27bdcfb" (UID: "bd53c98b-51cc-498a-ab37-f743a27bdcfb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:32:57.141360 master-0 kubenswrapper[33141]: I0308 03:32:57.141301 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0ee8c53-bf36-4459-a2c2-380293a09e26-kube-api-access-c8krg" (OuterVolumeSpecName: "kube-api-access-c8krg") pod "a0ee8c53-bf36-4459-a2c2-380293a09e26" (UID: "a0ee8c53-bf36-4459-a2c2-380293a09e26"). InnerVolumeSpecName "kube-api-access-c8krg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:32:57.142687 master-0 kubenswrapper[33141]: I0308 03:32:57.142638 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd53c98b-51cc-498a-ab37-f743a27bdcfb-kube-api-access-hz7l8" (OuterVolumeSpecName: "kube-api-access-hz7l8") pod "bd53c98b-51cc-498a-ab37-f743a27bdcfb" (UID: "bd53c98b-51cc-498a-ab37-f743a27bdcfb"). InnerVolumeSpecName "kube-api-access-hz7l8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:32:57.146035 master-0 kubenswrapper[33141]: I0308 03:32:57.145990 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0ee8c53-bf36-4459-a2c2-380293a09e26-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a0ee8c53-bf36-4459-a2c2-380293a09e26" (UID: "a0ee8c53-bf36-4459-a2c2-380293a09e26"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:32:57.238089 master-0 kubenswrapper[33141]: I0308 03:32:57.238037 33141 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 03:32:57.238089 master-0 kubenswrapper[33141]: I0308 03:32:57.238070 33141 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bd53c98b-51cc-498a-ab37-f743a27bdcfb-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 03:32:57.238089 master-0 kubenswrapper[33141]: I0308 03:32:57.238080 33141 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-config\") on node \"master-0\" DevicePath \"\"" Mar 08 03:32:57.238089 master-0 kubenswrapper[33141]: I0308 03:32:57.238089 33141 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ee8c53-bf36-4459-a2c2-380293a09e26-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 03:32:57.238089 master-0 kubenswrapper[33141]: I0308 03:32:57.238099 33141 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bd53c98b-51cc-498a-ab37-f743a27bdcfb-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 08 03:32:57.238534 master-0 kubenswrapper[33141]: I0308 03:32:57.238108 33141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8krg\" (UniqueName: \"kubernetes.io/projected/a0ee8c53-bf36-4459-a2c2-380293a09e26-kube-api-access-c8krg\") on node \"master-0\" DevicePath \"\"" Mar 08 03:32:57.238534 master-0 kubenswrapper[33141]: I0308 03:32:57.238116 33141 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-client-ca\") on node 
\"master-0\" DevicePath \"\"" Mar 08 03:32:57.238534 master-0 kubenswrapper[33141]: I0308 03:32:57.238134 33141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hz7l8\" (UniqueName: \"kubernetes.io/projected/bd53c98b-51cc-498a-ab37-f743a27bdcfb-kube-api-access-hz7l8\") on node \"master-0\" DevicePath \"\"" Mar 08 03:32:57.238534 master-0 kubenswrapper[33141]: I0308 03:32:57.238142 33141 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0ee8c53-bf36-4459-a2c2-380293a09e26-config\") on node \"master-0\" DevicePath \"\"" Mar 08 03:32:57.556697 master-0 kubenswrapper[33141]: I0308 03:32:57.556608 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" event={"ID":"a0ee8c53-bf36-4459-a2c2-380293a09e26","Type":"ContainerDied","Data":"7a6ea17a030d90670e0e331f269af06bb55ade280ec6f510768c353e818db740"} Mar 08 03:32:57.556697 master-0 kubenswrapper[33141]: I0308 03:32:57.556658 33141 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh" Mar 08 03:32:57.556697 master-0 kubenswrapper[33141]: I0308 03:32:57.556696 33141 scope.go:117] "RemoveContainer" containerID="510bc972f02c805726ce0e8b26c9f46e3ffb7b53590b52c60f2d8c1b5c1b2518" Mar 08 03:32:57.560302 master-0 kubenswrapper[33141]: I0308 03:32:57.559955 33141 generic.go:334] "Generic (PLEG): container finished" podID="bd53c98b-51cc-498a-ab37-f743a27bdcfb" containerID="0c081c5b9012641af62a91061e5a811ea320420fd3f4b7f190d5a657a4671603" exitCode=0 Mar 08 03:32:57.560302 master-0 kubenswrapper[33141]: I0308 03:32:57.560001 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" event={"ID":"bd53c98b-51cc-498a-ab37-f743a27bdcfb","Type":"ContainerDied","Data":"0c081c5b9012641af62a91061e5a811ea320420fd3f4b7f190d5a657a4671603"} Mar 08 03:32:57.560302 master-0 kubenswrapper[33141]: I0308 03:32:57.560033 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" event={"ID":"bd53c98b-51cc-498a-ab37-f743a27bdcfb","Type":"ContainerDied","Data":"846f36ee6a71e885eba4255e43db9daaf610d513f1e85ae2a0f46bf5cfb8b1a1"} Mar 08 03:32:57.560302 master-0 kubenswrapper[33141]: I0308 03:32:57.560105 33141 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75cd54f7f-2bg6l" Mar 08 03:32:57.587381 master-0 kubenswrapper[33141]: I0308 03:32:57.587306 33141 scope.go:117] "RemoveContainer" containerID="0c081c5b9012641af62a91061e5a811ea320420fd3f4b7f190d5a657a4671603" Mar 08 03:32:57.620345 master-0 kubenswrapper[33141]: I0308 03:32:57.618358 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh"] Mar 08 03:32:57.622147 master-0 kubenswrapper[33141]: I0308 03:32:57.621822 33141 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-694774cfc9-r5gkh"] Mar 08 03:32:57.625228 master-0 kubenswrapper[33141]: I0308 03:32:57.624957 33141 scope.go:117] "RemoveContainer" containerID="52064f3ac14760c9cb11d88b1c7b45aa27ea0169c06575ac706e2935eceff0c6" Mar 08 03:32:57.638884 master-0 kubenswrapper[33141]: I0308 03:32:57.638801 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75cd54f7f-2bg6l"] Mar 08 03:32:57.646018 master-0 kubenswrapper[33141]: I0308 03:32:57.645923 33141 scope.go:117] "RemoveContainer" containerID="0c081c5b9012641af62a91061e5a811ea320420fd3f4b7f190d5a657a4671603" Mar 08 03:32:57.646614 master-0 kubenswrapper[33141]: E0308 03:32:57.646549 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c081c5b9012641af62a91061e5a811ea320420fd3f4b7f190d5a657a4671603\": container with ID starting with 0c081c5b9012641af62a91061e5a811ea320420fd3f4b7f190d5a657a4671603 not found: ID does not exist" containerID="0c081c5b9012641af62a91061e5a811ea320420fd3f4b7f190d5a657a4671603" Mar 08 03:32:57.646723 master-0 kubenswrapper[33141]: I0308 03:32:57.646624 33141 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0c081c5b9012641af62a91061e5a811ea320420fd3f4b7f190d5a657a4671603"} err="failed to get container status \"0c081c5b9012641af62a91061e5a811ea320420fd3f4b7f190d5a657a4671603\": rpc error: code = NotFound desc = could not find container \"0c081c5b9012641af62a91061e5a811ea320420fd3f4b7f190d5a657a4671603\": container with ID starting with 0c081c5b9012641af62a91061e5a811ea320420fd3f4b7f190d5a657a4671603 not found: ID does not exist" Mar 08 03:32:57.646723 master-0 kubenswrapper[33141]: I0308 03:32:57.646657 33141 scope.go:117] "RemoveContainer" containerID="52064f3ac14760c9cb11d88b1c7b45aa27ea0169c06575ac706e2935eceff0c6" Mar 08 03:32:57.647143 master-0 kubenswrapper[33141]: E0308 03:32:57.647088 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52064f3ac14760c9cb11d88b1c7b45aa27ea0169c06575ac706e2935eceff0c6\": container with ID starting with 52064f3ac14760c9cb11d88b1c7b45aa27ea0169c06575ac706e2935eceff0c6 not found: ID does not exist" containerID="52064f3ac14760c9cb11d88b1c7b45aa27ea0169c06575ac706e2935eceff0c6" Mar 08 03:32:57.647143 master-0 kubenswrapper[33141]: I0308 03:32:57.647133 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52064f3ac14760c9cb11d88b1c7b45aa27ea0169c06575ac706e2935eceff0c6"} err="failed to get container status \"52064f3ac14760c9cb11d88b1c7b45aa27ea0169c06575ac706e2935eceff0c6\": rpc error: code = NotFound desc = could not find container \"52064f3ac14760c9cb11d88b1c7b45aa27ea0169c06575ac706e2935eceff0c6\": container with ID starting with 52064f3ac14760c9cb11d88b1c7b45aa27ea0169c06575ac706e2935eceff0c6 not found: ID does not exist" Mar 08 03:32:57.647766 master-0 kubenswrapper[33141]: I0308 03:32:57.647693 33141 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-75cd54f7f-2bg6l"] Mar 08 03:32:58.363081 master-0 kubenswrapper[33141]: 
I0308 03:32:58.362978 33141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0ee8c53-bf36-4459-a2c2-380293a09e26" path="/var/lib/kubelet/pods/a0ee8c53-bf36-4459-a2c2-380293a09e26/volumes" Mar 08 03:32:58.364179 master-0 kubenswrapper[33141]: I0308 03:32:58.364125 33141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd53c98b-51cc-498a-ab37-f743a27bdcfb" path="/var/lib/kubelet/pods/bd53c98b-51cc-498a-ab37-f743a27bdcfb/volumes" Mar 08 03:33:04.412801 master-0 kubenswrapper[33141]: I0308 03:33:04.412731 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z"] Mar 08 03:33:04.414375 master-0 kubenswrapper[33141]: E0308 03:33:04.414340 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd53c98b-51cc-498a-ab37-f743a27bdcfb" containerName="controller-manager" Mar 08 03:33:04.414808 master-0 kubenswrapper[33141]: I0308 03:33:04.414780 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd53c98b-51cc-498a-ab37-f743a27bdcfb" containerName="controller-manager" Mar 08 03:33:04.415011 master-0 kubenswrapper[33141]: E0308 03:33:04.414987 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0ee8c53-bf36-4459-a2c2-380293a09e26" containerName="route-controller-manager" Mar 08 03:33:04.415162 master-0 kubenswrapper[33141]: I0308 03:33:04.415134 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0ee8c53-bf36-4459-a2c2-380293a09e26" containerName="route-controller-manager" Mar 08 03:33:04.415422 master-0 kubenswrapper[33141]: E0308 03:33:04.415351 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd53c98b-51cc-498a-ab37-f743a27bdcfb" containerName="controller-manager" Mar 08 03:33:04.415599 master-0 kubenswrapper[33141]: I0308 03:33:04.415574 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd53c98b-51cc-498a-ab37-f743a27bdcfb" containerName="controller-manager" Mar 08 
03:33:04.415740 master-0 kubenswrapper[33141]: E0308 03:33:04.415718 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0ee8c53-bf36-4459-a2c2-380293a09e26" containerName="route-controller-manager" Mar 08 03:33:04.415860 master-0 kubenswrapper[33141]: I0308 03:33:04.415839 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0ee8c53-bf36-4459-a2c2-380293a09e26" containerName="route-controller-manager" Mar 08 03:33:04.416281 master-0 kubenswrapper[33141]: I0308 03:33:04.416252 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd53c98b-51cc-498a-ab37-f743a27bdcfb" containerName="controller-manager" Mar 08 03:33:04.416446 master-0 kubenswrapper[33141]: I0308 03:33:04.416423 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0ee8c53-bf36-4459-a2c2-380293a09e26" containerName="route-controller-manager" Mar 08 03:33:04.416577 master-0 kubenswrapper[33141]: I0308 03:33:04.416555 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd53c98b-51cc-498a-ab37-f743a27bdcfb" containerName="controller-manager" Mar 08 03:33:04.416719 master-0 kubenswrapper[33141]: I0308 03:33:04.416697 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0ee8c53-bf36-4459-a2c2-380293a09e26" containerName="route-controller-manager" Mar 08 03:33:04.417411 master-0 kubenswrapper[33141]: I0308 03:33:04.417381 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-5ccd479c8c-v4t2c"] Mar 08 03:33:04.418569 master-0 kubenswrapper[33141]: I0308 03:33:04.418529 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-5ccd479c8c-v4t2c" Mar 08 03:33:04.419572 master-0 kubenswrapper[33141]: I0308 03:33:04.418702 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z" Mar 08 03:33:04.426343 master-0 kubenswrapper[33141]: I0308 03:33:04.424340 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-vzsqv" Mar 08 03:33:04.431949 master-0 kubenswrapper[33141]: I0308 03:33:04.427002 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 08 03:33:04.431949 master-0 kubenswrapper[33141]: I0308 03:33:04.427645 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 08 03:33:04.431949 master-0 kubenswrapper[33141]: I0308 03:33:04.428250 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 08 03:33:04.431949 master-0 kubenswrapper[33141]: I0308 03:33:04.428489 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 08 03:33:04.431949 master-0 kubenswrapper[33141]: I0308 03:33:04.429199 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-h4sjt" Mar 08 03:33:04.431949 master-0 kubenswrapper[33141]: I0308 03:33:04.430217 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 08 03:33:04.431949 master-0 kubenswrapper[33141]: I0308 03:33:04.431140 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 08 03:33:04.438492 master-0 kubenswrapper[33141]: I0308 03:33:04.433520 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d5d8c978f-xfhx5"] Mar 08 03:33:04.441859 master-0 kubenswrapper[33141]: I0308 03:33:04.441807 33141 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 08 03:33:04.442957 master-0 kubenswrapper[33141]: I0308 03:33:04.442852 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-2cw9v"] Mar 08 03:33:04.444766 master-0 kubenswrapper[33141]: I0308 03:33:04.444723 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d5d8c978f-xfhx5" Mar 08 03:33:04.447345 master-0 kubenswrapper[33141]: I0308 03:33:04.447293 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 08 03:33:04.448769 master-0 kubenswrapper[33141]: I0308 03:33:04.448734 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-fvhvd" Mar 08 03:33:04.449377 master-0 kubenswrapper[33141]: I0308 03:33:04.449345 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 08 03:33:04.450090 master-0 kubenswrapper[33141]: I0308 03:33:04.449864 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-zp6r4"] Mar 08 03:33:04.450416 master-0 kubenswrapper[33141]: I0308 03:33:04.449680 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 08 03:33:04.450675 master-0 kubenswrapper[33141]: I0308 03:33:04.450630 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"] Mar 08 03:33:04.453256 master-0 kubenswrapper[33141]: I0308 03:33:04.451450 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 08 03:33:04.453569 master-0 
kubenswrapper[33141]: I0308 03:33:04.451872 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-6c7fb6b958-2cw9v" Mar 08 03:33:04.453736 master-0 kubenswrapper[33141]: I0308 03:33:04.451900 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 08 03:33:04.454241 master-0 kubenswrapper[33141]: I0308 03:33:04.452377 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z"] Mar 08 03:33:04.454469 master-0 kubenswrapper[33141]: I0308 03:33:04.454437 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-5ccd479c8c-v4t2c"] Mar 08 03:33:04.454611 master-0 kubenswrapper[33141]: I0308 03:33:04.452250 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cbd49d755-zp6r4" Mar 08 03:33:04.456648 master-0 kubenswrapper[33141]: I0308 03:33:04.452446 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.460728 master-0 kubenswrapper[33141]: I0308 03:33:04.460501 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Mar 08 03:33:04.460728 master-0 kubenswrapper[33141]: I0308 03:33:04.460663 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 08 03:33:04.460858 master-0 kubenswrapper[33141]: I0308 03:33:04.460735 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Mar 08 03:33:04.461312 master-0 kubenswrapper[33141]: I0308 03:33:04.461006 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Mar 08 03:33:04.462155 master-0 kubenswrapper[33141]: I0308 03:33:04.462102 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 08 03:33:04.462350 master-0 kubenswrapper[33141]: I0308 03:33:04.462320 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 08 03:33:04.462961 master-0 kubenswrapper[33141]: I0308 03:33:04.462618 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Mar 08 03:33:04.462961 master-0 kubenswrapper[33141]: I0308 03:33:04.462846 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-t6pd7"
Mar 08 03:33:04.463071 master-0 kubenswrapper[33141]: I0308 03:33:04.463059 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Mar 08 03:33:04.463489 master-0 kubenswrapper[33141]: I0308 03:33:04.463249 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls"
Mar 08 03:33:04.463489 master-0 kubenswrapper[33141]: I0308 03:33:04.463472 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-nqkx9"
Mar 08 03:33:04.463659 master-0 kubenswrapper[33141]: I0308 03:33:04.463623 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-zp6r4"]
Mar 08 03:33:04.466024 master-0 kubenswrapper[33141]: I0308 03:33:04.465964 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Mar 08 03:33:04.467273 master-0 kubenswrapper[33141]: I0308 03:33:04.467242 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 08 03:33:04.468814 master-0 kubenswrapper[33141]: I0308 03:33:04.468790 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-2cw9v"]
Mar 08 03:33:04.472303 master-0 kubenswrapper[33141]: I0308 03:33:04.472242 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"]
Mar 08 03:33:04.473260 master-0 kubenswrapper[33141]: I0308 03:33:04.473195 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Mar 08 03:33:04.473766 master-0 kubenswrapper[33141]: I0308 03:33:04.473706 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Mar 08 03:33:04.476864 master-0 kubenswrapper[33141]: I0308 03:33:04.476804 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d5d8c978f-xfhx5"]
Mar 08 03:33:04.547688 master-0 kubenswrapper[33141]: I0308 03:33:04.547645 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bb32fcd-ca33-4cbf-b2b4-7000197032a9-client-ca\") pod \"controller-manager-6b68cd84fb-w7r5z\" (UID: \"8bb32fcd-ca33-4cbf-b2b4-7000197032a9\") " pod="openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z"
Mar 08 03:33:04.547808 master-0 kubenswrapper[33141]: I0308 03:33:04.547701 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/302e483a-6d6f-4a41-b4d7-3d11898277f4-telemeter-trusted-ca-bundle\") pod \"telemeter-client-5cb97dd5fc-g7fqr\" (UID: \"302e483a-6d6f-4a41-b4d7-3d11898277f4\") " pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.547808 master-0 kubenswrapper[33141]: I0308 03:33:04.547736 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9c708dee-3f8e-4c03-82bd-d94fec91ac44-monitoring-plugin-cert\") pod \"monitoring-plugin-5ccd479c8c-v4t2c\" (UID: \"9c708dee-3f8e-4c03-82bd-d94fec91ac44\") " pod="openshift-monitoring/monitoring-plugin-5ccd479c8c-v4t2c"
Mar 08 03:33:04.547808 master-0 kubenswrapper[33141]: I0308 03:33:04.547760 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/302e483a-6d6f-4a41-b4d7-3d11898277f4-telemeter-client-tls\") pod \"telemeter-client-5cb97dd5fc-g7fqr\" (UID: \"302e483a-6d6f-4a41-b4d7-3d11898277f4\") " pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.547808 master-0 kubenswrapper[33141]: I0308 03:33:04.547786 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/302e483a-6d6f-4a41-b4d7-3d11898277f4-secret-telemeter-client\") pod \"telemeter-client-5cb97dd5fc-g7fqr\" (UID: \"302e483a-6d6f-4a41-b4d7-3d11898277f4\") " pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.547988 master-0 kubenswrapper[33141]: I0308 03:33:04.547812 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/456484f6-a19b-49f9-863b-f76e6f0c8c8f-trusted-ca\") pod \"console-operator-6c7fb6b958-2cw9v\" (UID: \"456484f6-a19b-49f9-863b-f76e6f0c8c8f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2cw9v"
Mar 08 03:33:04.547988 master-0 kubenswrapper[33141]: I0308 03:33:04.547835 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/302e483a-6d6f-4a41-b4d7-3d11898277f4-serving-certs-ca-bundle\") pod \"telemeter-client-5cb97dd5fc-g7fqr\" (UID: \"302e483a-6d6f-4a41-b4d7-3d11898277f4\") " pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.547988 master-0 kubenswrapper[33141]: I0308 03:33:04.547860 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f0240fbc-0596-49fe-afb1-24cb1a10470f-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-zp6r4\" (UID: \"f0240fbc-0596-49fe-afb1-24cb1a10470f\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-zp6r4"
Mar 08 03:33:04.547988 master-0 kubenswrapper[33141]: I0308 03:33:04.547883 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/456484f6-a19b-49f9-863b-f76e6f0c8c8f-config\") pod \"console-operator-6c7fb6b958-2cw9v\" (UID: \"456484f6-a19b-49f9-863b-f76e6f0c8c8f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2cw9v"
Mar 08 03:33:04.547988 master-0 kubenswrapper[33141]: I0308 03:33:04.547936 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8bb32fcd-ca33-4cbf-b2b4-7000197032a9-proxy-ca-bundles\") pod \"controller-manager-6b68cd84fb-w7r5z\" (UID: \"8bb32fcd-ca33-4cbf-b2b4-7000197032a9\") " pod="openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z"
Mar 08 03:33:04.547988 master-0 kubenswrapper[33141]: I0308 03:33:04.547968 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bb32fcd-ca33-4cbf-b2b4-7000197032a9-config\") pod \"controller-manager-6b68cd84fb-w7r5z\" (UID: \"8bb32fcd-ca33-4cbf-b2b4-7000197032a9\") " pod="openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z"
Mar 08 03:33:04.548214 master-0 kubenswrapper[33141]: I0308 03:33:04.548001 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j9m9\" (UniqueName: \"kubernetes.io/projected/302e483a-6d6f-4a41-b4d7-3d11898277f4-kube-api-access-5j9m9\") pod \"telemeter-client-5cb97dd5fc-g7fqr\" (UID: \"302e483a-6d6f-4a41-b4d7-3d11898277f4\") " pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.548214 master-0 kubenswrapper[33141]: I0308 03:33:04.548031 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb512861-502a-4b1c-87ee-8ac96377663a-config\") pod \"route-controller-manager-7d5d8c978f-xfhx5\" (UID: \"cb512861-502a-4b1c-87ee-8ac96377663a\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d8c978f-xfhx5"
Mar 08 03:33:04.548214 master-0 kubenswrapper[33141]: I0308 03:33:04.548056 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/302e483a-6d6f-4a41-b4d7-3d11898277f4-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-5cb97dd5fc-g7fqr\" (UID: \"302e483a-6d6f-4a41-b4d7-3d11898277f4\") " pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.548214 master-0 kubenswrapper[33141]: I0308 03:33:04.548089 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf28x\" (UniqueName: \"kubernetes.io/projected/456484f6-a19b-49f9-863b-f76e6f0c8c8f-kube-api-access-cf28x\") pod \"console-operator-6c7fb6b958-2cw9v\" (UID: \"456484f6-a19b-49f9-863b-f76e6f0c8c8f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2cw9v"
Mar 08 03:33:04.548214 master-0 kubenswrapper[33141]: I0308 03:33:04.548112 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/302e483a-6d6f-4a41-b4d7-3d11898277f4-federate-client-tls\") pod \"telemeter-client-5cb97dd5fc-g7fqr\" (UID: \"302e483a-6d6f-4a41-b4d7-3d11898277f4\") " pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.548214 master-0 kubenswrapper[33141]: I0308 03:33:04.548132 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/302e483a-6d6f-4a41-b4d7-3d11898277f4-metrics-client-ca\") pod \"telemeter-client-5cb97dd5fc-g7fqr\" (UID: \"302e483a-6d6f-4a41-b4d7-3d11898277f4\") " pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.548214 master-0 kubenswrapper[33141]: I0308 03:33:04.548154 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bb32fcd-ca33-4cbf-b2b4-7000197032a9-serving-cert\") pod \"controller-manager-6b68cd84fb-w7r5z\" (UID: \"8bb32fcd-ca33-4cbf-b2b4-7000197032a9\") " pod="openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z"
Mar 08 03:33:04.548214 master-0 kubenswrapper[33141]: I0308 03:33:04.548176 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/456484f6-a19b-49f9-863b-f76e6f0c8c8f-serving-cert\") pod \"console-operator-6c7fb6b958-2cw9v\" (UID: \"456484f6-a19b-49f9-863b-f76e6f0c8c8f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2cw9v"
Mar 08 03:33:04.548214 master-0 kubenswrapper[33141]: I0308 03:33:04.548203 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m7kj\" (UniqueName: \"kubernetes.io/projected/cb512861-502a-4b1c-87ee-8ac96377663a-kube-api-access-8m7kj\") pod \"route-controller-manager-7d5d8c978f-xfhx5\" (UID: \"cb512861-502a-4b1c-87ee-8ac96377663a\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d8c978f-xfhx5"
Mar 08 03:33:04.548559 master-0 kubenswrapper[33141]: I0308 03:33:04.548227 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f0240fbc-0596-49fe-afb1-24cb1a10470f-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-zp6r4\" (UID: \"f0240fbc-0596-49fe-afb1-24cb1a10470f\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-zp6r4"
Mar 08 03:33:04.548559 master-0 kubenswrapper[33141]: I0308 03:33:04.548249 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb512861-502a-4b1c-87ee-8ac96377663a-client-ca\") pod \"route-controller-manager-7d5d8c978f-xfhx5\" (UID: \"cb512861-502a-4b1c-87ee-8ac96377663a\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d8c978f-xfhx5"
Mar 08 03:33:04.548559 master-0 kubenswrapper[33141]: I0308 03:33:04.548278 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srlmb\" (UniqueName: \"kubernetes.io/projected/8bb32fcd-ca33-4cbf-b2b4-7000197032a9-kube-api-access-srlmb\") pod \"controller-manager-6b68cd84fb-w7r5z\" (UID: \"8bb32fcd-ca33-4cbf-b2b4-7000197032a9\") " pod="openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z"
Mar 08 03:33:04.548559 master-0 kubenswrapper[33141]: I0308 03:33:04.548307 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb512861-502a-4b1c-87ee-8ac96377663a-serving-cert\") pod \"route-controller-manager-7d5d8c978f-xfhx5\" (UID: \"cb512861-502a-4b1c-87ee-8ac96377663a\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d8c978f-xfhx5"
Mar 08 03:33:04.649658 master-0 kubenswrapper[33141]: I0308 03:33:04.649561 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srlmb\" (UniqueName: \"kubernetes.io/projected/8bb32fcd-ca33-4cbf-b2b4-7000197032a9-kube-api-access-srlmb\") pod \"controller-manager-6b68cd84fb-w7r5z\" (UID: \"8bb32fcd-ca33-4cbf-b2b4-7000197032a9\") " pod="openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z"
Mar 08 03:33:04.649995 master-0 kubenswrapper[33141]: I0308 03:33:04.649696 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb512861-502a-4b1c-87ee-8ac96377663a-serving-cert\") pod \"route-controller-manager-7d5d8c978f-xfhx5\" (UID: \"cb512861-502a-4b1c-87ee-8ac96377663a\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d8c978f-xfhx5"
Mar 08 03:33:04.649995 master-0 kubenswrapper[33141]: I0308 03:33:04.649776 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bb32fcd-ca33-4cbf-b2b4-7000197032a9-client-ca\") pod \"controller-manager-6b68cd84fb-w7r5z\" (UID: \"8bb32fcd-ca33-4cbf-b2b4-7000197032a9\") " pod="openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z"
Mar 08 03:33:04.649995 master-0 kubenswrapper[33141]: I0308 03:33:04.649850 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/302e483a-6d6f-4a41-b4d7-3d11898277f4-telemeter-trusted-ca-bundle\") pod \"telemeter-client-5cb97dd5fc-g7fqr\" (UID: \"302e483a-6d6f-4a41-b4d7-3d11898277f4\") " pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.649995 master-0 kubenswrapper[33141]: I0308 03:33:04.649942 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9c708dee-3f8e-4c03-82bd-d94fec91ac44-monitoring-plugin-cert\") pod \"monitoring-plugin-5ccd479c8c-v4t2c\" (UID: \"9c708dee-3f8e-4c03-82bd-d94fec91ac44\") " pod="openshift-monitoring/monitoring-plugin-5ccd479c8c-v4t2c"
Mar 08 03:33:04.650276 master-0 kubenswrapper[33141]: I0308 03:33:04.649998 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/302e483a-6d6f-4a41-b4d7-3d11898277f4-telemeter-client-tls\") pod \"telemeter-client-5cb97dd5fc-g7fqr\" (UID: \"302e483a-6d6f-4a41-b4d7-3d11898277f4\") " pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.650276 master-0 kubenswrapper[33141]: I0308 03:33:04.650059 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/302e483a-6d6f-4a41-b4d7-3d11898277f4-secret-telemeter-client\") pod \"telemeter-client-5cb97dd5fc-g7fqr\" (UID: \"302e483a-6d6f-4a41-b4d7-3d11898277f4\") " pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.650276 master-0 kubenswrapper[33141]: I0308 03:33:04.650121 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/456484f6-a19b-49f9-863b-f76e6f0c8c8f-trusted-ca\") pod \"console-operator-6c7fb6b958-2cw9v\" (UID: \"456484f6-a19b-49f9-863b-f76e6f0c8c8f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2cw9v"
Mar 08 03:33:04.650276 master-0 kubenswrapper[33141]: I0308 03:33:04.650171 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/302e483a-6d6f-4a41-b4d7-3d11898277f4-serving-certs-ca-bundle\") pod \"telemeter-client-5cb97dd5fc-g7fqr\" (UID: \"302e483a-6d6f-4a41-b4d7-3d11898277f4\") " pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.650276 master-0 kubenswrapper[33141]: I0308 03:33:04.650231 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f0240fbc-0596-49fe-afb1-24cb1a10470f-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-zp6r4\" (UID: \"f0240fbc-0596-49fe-afb1-24cb1a10470f\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-zp6r4"
Mar 08 03:33:04.650276 master-0 kubenswrapper[33141]: I0308 03:33:04.650275 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/456484f6-a19b-49f9-863b-f76e6f0c8c8f-config\") pod \"console-operator-6c7fb6b958-2cw9v\" (UID: \"456484f6-a19b-49f9-863b-f76e6f0c8c8f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2cw9v"
Mar 08 03:33:04.651001 master-0 kubenswrapper[33141]: I0308 03:33:04.650335 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8bb32fcd-ca33-4cbf-b2b4-7000197032a9-proxy-ca-bundles\") pod \"controller-manager-6b68cd84fb-w7r5z\" (UID: \"8bb32fcd-ca33-4cbf-b2b4-7000197032a9\") " pod="openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z"
Mar 08 03:33:04.651001 master-0 kubenswrapper[33141]: I0308 03:33:04.650396 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bb32fcd-ca33-4cbf-b2b4-7000197032a9-config\") pod \"controller-manager-6b68cd84fb-w7r5z\" (UID: \"8bb32fcd-ca33-4cbf-b2b4-7000197032a9\") " pod="openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z"
Mar 08 03:33:04.651001 master-0 kubenswrapper[33141]: I0308 03:33:04.650454 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j9m9\" (UniqueName: \"kubernetes.io/projected/302e483a-6d6f-4a41-b4d7-3d11898277f4-kube-api-access-5j9m9\") pod \"telemeter-client-5cb97dd5fc-g7fqr\" (UID: \"302e483a-6d6f-4a41-b4d7-3d11898277f4\") " pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.651001 master-0 kubenswrapper[33141]: I0308 03:33:04.650517 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb512861-502a-4b1c-87ee-8ac96377663a-config\") pod \"route-controller-manager-7d5d8c978f-xfhx5\" (UID: \"cb512861-502a-4b1c-87ee-8ac96377663a\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d8c978f-xfhx5"
Mar 08 03:33:04.652530 master-0 kubenswrapper[33141]: I0308 03:33:04.652464 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8bb32fcd-ca33-4cbf-b2b4-7000197032a9-client-ca\") pod \"controller-manager-6b68cd84fb-w7r5z\" (UID: \"8bb32fcd-ca33-4cbf-b2b4-7000197032a9\") " pod="openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z"
Mar 08 03:33:04.653054 master-0 kubenswrapper[33141]: I0308 03:33:04.650572 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/302e483a-6d6f-4a41-b4d7-3d11898277f4-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-5cb97dd5fc-g7fqr\" (UID: \"302e483a-6d6f-4a41-b4d7-3d11898277f4\") " pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.653424 master-0 kubenswrapper[33141]: I0308 03:33:04.653370 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf28x\" (UniqueName: \"kubernetes.io/projected/456484f6-a19b-49f9-863b-f76e6f0c8c8f-kube-api-access-cf28x\") pod \"console-operator-6c7fb6b958-2cw9v\" (UID: \"456484f6-a19b-49f9-863b-f76e6f0c8c8f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2cw9v"
Mar 08 03:33:04.653709 master-0 kubenswrapper[33141]: I0308 03:33:04.653120 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/302e483a-6d6f-4a41-b4d7-3d11898277f4-telemeter-trusted-ca-bundle\") pod \"telemeter-client-5cb97dd5fc-g7fqr\" (UID: \"302e483a-6d6f-4a41-b4d7-3d11898277f4\") " pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.653709 master-0 kubenswrapper[33141]: I0308 03:33:04.653659 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/302e483a-6d6f-4a41-b4d7-3d11898277f4-federate-client-tls\") pod \"telemeter-client-5cb97dd5fc-g7fqr\" (UID: \"302e483a-6d6f-4a41-b4d7-3d11898277f4\") " pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.655179 master-0 kubenswrapper[33141]: I0308 03:33:04.653748 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/302e483a-6d6f-4a41-b4d7-3d11898277f4-metrics-client-ca\") pod \"telemeter-client-5cb97dd5fc-g7fqr\" (UID: \"302e483a-6d6f-4a41-b4d7-3d11898277f4\") " pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.655179 master-0 kubenswrapper[33141]: I0308 03:33:04.653796 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bb32fcd-ca33-4cbf-b2b4-7000197032a9-serving-cert\") pod \"controller-manager-6b68cd84fb-w7r5z\" (UID: \"8bb32fcd-ca33-4cbf-b2b4-7000197032a9\") " pod="openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z"
Mar 08 03:33:04.655179 master-0 kubenswrapper[33141]: I0308 03:33:04.653830 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/456484f6-a19b-49f9-863b-f76e6f0c8c8f-serving-cert\") pod \"console-operator-6c7fb6b958-2cw9v\" (UID: \"456484f6-a19b-49f9-863b-f76e6f0c8c8f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2cw9v"
Mar 08 03:33:04.655179 master-0 kubenswrapper[33141]: I0308 03:33:04.653869 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f0240fbc-0596-49fe-afb1-24cb1a10470f-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-zp6r4\" (UID: \"f0240fbc-0596-49fe-afb1-24cb1a10470f\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-zp6r4"
Mar 08 03:33:04.655179 master-0 kubenswrapper[33141]: I0308 03:33:04.653896 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb512861-502a-4b1c-87ee-8ac96377663a-client-ca\") pod \"route-controller-manager-7d5d8c978f-xfhx5\" (UID: \"cb512861-502a-4b1c-87ee-8ac96377663a\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d8c978f-xfhx5"
Mar 08 03:33:04.655179 master-0 kubenswrapper[33141]: I0308 03:33:04.653942 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m7kj\" (UniqueName: \"kubernetes.io/projected/cb512861-502a-4b1c-87ee-8ac96377663a-kube-api-access-8m7kj\") pod \"route-controller-manager-7d5d8c978f-xfhx5\" (UID: \"cb512861-502a-4b1c-87ee-8ac96377663a\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d8c978f-xfhx5"
Mar 08 03:33:04.655179 master-0 kubenswrapper[33141]: I0308 03:33:04.654414 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bb32fcd-ca33-4cbf-b2b4-7000197032a9-config\") pod \"controller-manager-6b68cd84fb-w7r5z\" (UID: \"8bb32fcd-ca33-4cbf-b2b4-7000197032a9\") " pod="openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z"
Mar 08 03:33:04.655959 master-0 kubenswrapper[33141]: I0308 03:33:04.655820 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f0240fbc-0596-49fe-afb1-24cb1a10470f-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-zp6r4\" (UID: \"f0240fbc-0596-49fe-afb1-24cb1a10470f\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-zp6r4"
Mar 08 03:33:04.656247 master-0 kubenswrapper[33141]: I0308 03:33:04.656202 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/9c708dee-3f8e-4c03-82bd-d94fec91ac44-monitoring-plugin-cert\") pod \"monitoring-plugin-5ccd479c8c-v4t2c\" (UID: \"9c708dee-3f8e-4c03-82bd-d94fec91ac44\") " pod="openshift-monitoring/monitoring-plugin-5ccd479c8c-v4t2c"
Mar 08 03:33:04.656448 master-0 kubenswrapper[33141]: I0308 03:33:04.656388 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/456484f6-a19b-49f9-863b-f76e6f0c8c8f-trusted-ca\") pod \"console-operator-6c7fb6b958-2cw9v\" (UID: \"456484f6-a19b-49f9-863b-f76e6f0c8c8f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2cw9v"
Mar 08 03:33:04.663114 master-0 kubenswrapper[33141]: I0308 03:33:04.656465 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cb512861-502a-4b1c-87ee-8ac96377663a-client-ca\") pod \"route-controller-manager-7d5d8c978f-xfhx5\" (UID: \"cb512861-502a-4b1c-87ee-8ac96377663a\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d8c978f-xfhx5"
Mar 08 03:33:04.663114 master-0 kubenswrapper[33141]: I0308 03:33:04.657487 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/302e483a-6d6f-4a41-b4d7-3d11898277f4-metrics-client-ca\") pod \"telemeter-client-5cb97dd5fc-g7fqr\" (UID: \"302e483a-6d6f-4a41-b4d7-3d11898277f4\") " pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.663114 master-0 kubenswrapper[33141]: I0308 03:33:04.657889 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb512861-502a-4b1c-87ee-8ac96377663a-serving-cert\") pod \"route-controller-manager-7d5d8c978f-xfhx5\" (UID: \"cb512861-502a-4b1c-87ee-8ac96377663a\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d8c978f-xfhx5"
Mar 08 03:33:04.663114 master-0 kubenswrapper[33141]: I0308 03:33:04.658033 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/456484f6-a19b-49f9-863b-f76e6f0c8c8f-config\") pod \"console-operator-6c7fb6b958-2cw9v\" (UID: \"456484f6-a19b-49f9-863b-f76e6f0c8c8f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2cw9v"
Mar 08 03:33:04.663114 master-0 kubenswrapper[33141]: I0308 03:33:04.658527 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/302e483a-6d6f-4a41-b4d7-3d11898277f4-serving-certs-ca-bundle\") pod \"telemeter-client-5cb97dd5fc-g7fqr\" (UID: \"302e483a-6d6f-4a41-b4d7-3d11898277f4\") " pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.663114 master-0 kubenswrapper[33141]: I0308 03:33:04.660971 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb512861-502a-4b1c-87ee-8ac96377663a-config\") pod \"route-controller-manager-7d5d8c978f-xfhx5\" (UID: \"cb512861-502a-4b1c-87ee-8ac96377663a\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d8c978f-xfhx5"
Mar 08 03:33:04.663114 master-0 kubenswrapper[33141]: I0308 03:33:04.661218 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/302e483a-6d6f-4a41-b4d7-3d11898277f4-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-5cb97dd5fc-g7fqr\" (UID: \"302e483a-6d6f-4a41-b4d7-3d11898277f4\") " pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.663114 master-0 kubenswrapper[33141]: I0308 03:33:04.661645 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8bb32fcd-ca33-4cbf-b2b4-7000197032a9-proxy-ca-bundles\") pod \"controller-manager-6b68cd84fb-w7r5z\" (UID: \"8bb32fcd-ca33-4cbf-b2b4-7000197032a9\") " pod="openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z"
Mar 08 03:33:04.663114 master-0 kubenswrapper[33141]: I0308 03:33:04.662290 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f0240fbc-0596-49fe-afb1-24cb1a10470f-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-zp6r4\" (UID: \"f0240fbc-0596-49fe-afb1-24cb1a10470f\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-zp6r4"
Mar 08 03:33:04.663901 master-0 kubenswrapper[33141]: I0308 03:33:04.663235 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/302e483a-6d6f-4a41-b4d7-3d11898277f4-telemeter-client-tls\") pod \"telemeter-client-5cb97dd5fc-g7fqr\" (UID: \"302e483a-6d6f-4a41-b4d7-3d11898277f4\") " pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.666185 master-0 kubenswrapper[33141]: I0308 03:33:04.666119 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bb32fcd-ca33-4cbf-b2b4-7000197032a9-serving-cert\") pod \"controller-manager-6b68cd84fb-w7r5z\" (UID: \"8bb32fcd-ca33-4cbf-b2b4-7000197032a9\") " pod="openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z"
Mar 08 03:33:04.671250 master-0 kubenswrapper[33141]: I0308 03:33:04.671217 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/302e483a-6d6f-4a41-b4d7-3d11898277f4-federate-client-tls\") pod \"telemeter-client-5cb97dd5fc-g7fqr\" (UID: \"302e483a-6d6f-4a41-b4d7-3d11898277f4\") " pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.671631 master-0 kubenswrapper[33141]: I0308 03:33:04.671565 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/456484f6-a19b-49f9-863b-f76e6f0c8c8f-serving-cert\") pod \"console-operator-6c7fb6b958-2cw9v\" (UID: \"456484f6-a19b-49f9-863b-f76e6f0c8c8f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2cw9v"
Mar 08 03:33:04.672524 master-0 kubenswrapper[33141]: I0308 03:33:04.672478 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/302e483a-6d6f-4a41-b4d7-3d11898277f4-secret-telemeter-client\") pod \"telemeter-client-5cb97dd5fc-g7fqr\" (UID: \"302e483a-6d6f-4a41-b4d7-3d11898277f4\") " pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.679241 master-0 kubenswrapper[33141]: I0308 03:33:04.679207 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m7kj\" (UniqueName: \"kubernetes.io/projected/cb512861-502a-4b1c-87ee-8ac96377663a-kube-api-access-8m7kj\") pod \"route-controller-manager-7d5d8c978f-xfhx5\" (UID: \"cb512861-502a-4b1c-87ee-8ac96377663a\") " pod="openshift-route-controller-manager/route-controller-manager-7d5d8c978f-xfhx5"
Mar 08 03:33:04.681585 master-0 kubenswrapper[33141]: I0308 03:33:04.681483 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf28x\" (UniqueName: \"kubernetes.io/projected/456484f6-a19b-49f9-863b-f76e6f0c8c8f-kube-api-access-cf28x\") pod \"console-operator-6c7fb6b958-2cw9v\" (UID: \"456484f6-a19b-49f9-863b-f76e6f0c8c8f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-2cw9v"
Mar 08 03:33:04.682300 master-0 kubenswrapper[33141]: I0308 03:33:04.682253 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srlmb\" (UniqueName: \"kubernetes.io/projected/8bb32fcd-ca33-4cbf-b2b4-7000197032a9-kube-api-access-srlmb\") pod \"controller-manager-6b68cd84fb-w7r5z\" (UID: \"8bb32fcd-ca33-4cbf-b2b4-7000197032a9\") " pod="openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z"
Mar 08 03:33:04.686473 master-0 kubenswrapper[33141]: I0308 03:33:04.686413 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j9m9\" (UniqueName: \"kubernetes.io/projected/302e483a-6d6f-4a41-b4d7-3d11898277f4-kube-api-access-5j9m9\") pod \"telemeter-client-5cb97dd5fc-g7fqr\" (UID: \"302e483a-6d6f-4a41-b4d7-3d11898277f4\") " pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:04.783223 master-0 kubenswrapper[33141]: I0308 03:33:04.783172 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-5ccd479c8c-v4t2c"
Mar 08 03:33:04.813885 master-0 kubenswrapper[33141]: I0308 03:33:04.813809 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z"
Mar 08 03:33:04.838305 master-0 kubenswrapper[33141]: I0308 03:33:04.838177 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d5d8c978f-xfhx5"
Mar 08 03:33:04.859871 master-0 kubenswrapper[33141]: I0308 03:33:04.859720 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-6c7fb6b958-2cw9v"
Mar 08 03:33:04.876385 master-0 kubenswrapper[33141]: I0308 03:33:04.876312 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cbd49d755-zp6r4"
Mar 08 03:33:04.888617 master-0 kubenswrapper[33141]: I0308 03:33:04.888033 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"
Mar 08 03:33:05.246127 master-0 kubenswrapper[33141]: I0308 03:33:05.246065 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-5ccd479c8c-v4t2c"]
Mar 08 03:33:05.249024 master-0 kubenswrapper[33141]: W0308 03:33:05.248947 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c708dee_3f8e_4c03_82bd_d94fec91ac44.slice/crio-7a4b58f8ac9ba11af942df029dc40c565ff28ec2438069ad3c13dfa4c72c1ac9 WatchSource:0}: Error finding container 7a4b58f8ac9ba11af942df029dc40c565ff28ec2438069ad3c13dfa4c72c1ac9: Status 404 returned error can't find the container with id 7a4b58f8ac9ba11af942df029dc40c565ff28ec2438069ad3c13dfa4c72c1ac9
Mar 08 03:33:05.250784 master-0 kubenswrapper[33141]: I0308 03:33:05.250765 33141 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 08 03:33:05.312027 master-0 kubenswrapper[33141]: I0308 03:33:05.311978 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z"]
Mar 08 03:33:05.317587 master-0 kubenswrapper[33141]: W0308 03:33:05.317538 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bb32fcd_ca33_4cbf_b2b4_7000197032a9.slice/crio-9eb1e19dd537ff0c130863d89196066b730524018c1f6df0b4c699e24f40dc6b WatchSource:0}: Error finding container 9eb1e19dd537ff0c130863d89196066b730524018c1f6df0b4c699e24f40dc6b: Status 404 returned error can't find the container with id 9eb1e19dd537ff0c130863d89196066b730524018c1f6df0b4c699e24f40dc6b
Mar 08 03:33:05.377882 master-0 kubenswrapper[33141]: I0308 03:33:05.377814 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d5d8c978f-xfhx5"]
Mar 08 03:33:05.383770 master-0 kubenswrapper[33141]: W0308 03:33:05.383716 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb512861_502a_4b1c_87ee_8ac96377663a.slice/crio-60bd5d0e704d1f01f12a5939f37e619b285b1eac1bd7d8100edee43bb4941b55 WatchSource:0}: Error finding container 60bd5d0e704d1f01f12a5939f37e619b285b1eac1bd7d8100edee43bb4941b55: Status 404 returned error can't find the container with id 60bd5d0e704d1f01f12a5939f37e619b285b1eac1bd7d8100edee43bb4941b55
Mar 08 03:33:05.534575 master-0 kubenswrapper[33141]: I0308 03:33:05.534523 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-2cw9v"]
Mar 08 03:33:05.542625 master-0 kubenswrapper[33141]: I0308 03:33:05.542572 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr"]
Mar 08 03:33:05.544498 master-0 kubenswrapper[33141]: I0308 03:33:05.544466 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-zp6r4"]
Mar 08 03:33:05.626219 master-0 kubenswrapper[33141]: I0308 03:33:05.626161 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-2cw9v" event={"ID":"456484f6-a19b-49f9-863b-f76e6f0c8c8f","Type":"ContainerStarted","Data":"af984a882d270b6a71b04c6c6954c2de496403f597e4afd7619579b47af75318"}
Mar 08 03:33:05.627175 master-0 kubenswrapper[33141]: I0308 03:33:05.627152 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-5ccd479c8c-v4t2c" event={"ID":"9c708dee-3f8e-4c03-82bd-d94fec91ac44","Type":"ContainerStarted","Data":"7a4b58f8ac9ba11af942df029dc40c565ff28ec2438069ad3c13dfa4c72c1ac9"}
Mar 08 03:33:05.629165 master-0 kubenswrapper[33141]: I0308 03:33:05.629118 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-route-controller-manager/route-controller-manager-7d5d8c978f-xfhx5" event={"ID":"cb512861-502a-4b1c-87ee-8ac96377663a","Type":"ContainerStarted","Data":"899a3270eee0c2d71d06676eb90bf1a10e607877562732ddf74d0a2c4a124ee4"} Mar 08 03:33:05.629260 master-0 kubenswrapper[33141]: I0308 03:33:05.629173 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d5d8c978f-xfhx5" event={"ID":"cb512861-502a-4b1c-87ee-8ac96377663a","Type":"ContainerStarted","Data":"60bd5d0e704d1f01f12a5939f37e619b285b1eac1bd7d8100edee43bb4941b55"} Mar 08 03:33:05.629713 master-0 kubenswrapper[33141]: I0308 03:33:05.629678 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d5d8c978f-xfhx5" Mar 08 03:33:05.632104 master-0 kubenswrapper[33141]: I0308 03:33:05.632040 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z" event={"ID":"8bb32fcd-ca33-4cbf-b2b4-7000197032a9","Type":"ContainerStarted","Data":"7d3dca92325b649bc8fc45b17a682e5cf5a6eb6fb0de3640394d227a16341731"} Mar 08 03:33:05.632213 master-0 kubenswrapper[33141]: I0308 03:33:05.632108 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z" event={"ID":"8bb32fcd-ca33-4cbf-b2b4-7000197032a9","Type":"ContainerStarted","Data":"9eb1e19dd537ff0c130863d89196066b730524018c1f6df0b4c699e24f40dc6b"} Mar 08 03:33:05.632437 master-0 kubenswrapper[33141]: I0308 03:33:05.632400 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z" Mar 08 03:33:05.633773 master-0 kubenswrapper[33141]: I0308 03:33:05.633749 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cbd49d755-zp6r4" 
event={"ID":"f0240fbc-0596-49fe-afb1-24cb1a10470f","Type":"ContainerStarted","Data":"182ff417d5b5e41e4b0504626b1b46b1622093d0c1f2a58951e199ba6855f3f7"} Mar 08 03:33:05.637767 master-0 kubenswrapper[33141]: I0308 03:33:05.635101 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr" event={"ID":"302e483a-6d6f-4a41-b4d7-3d11898277f4","Type":"ContainerStarted","Data":"01bd2078f078efc41a9494ae66cdb46a32b0ffd3d368aad3732bf44017789a23"} Mar 08 03:33:05.637767 master-0 kubenswrapper[33141]: I0308 03:33:05.637760 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z" Mar 08 03:33:05.653018 master-0 kubenswrapper[33141]: I0308 03:33:05.652934 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d5d8c978f-xfhx5" podStartSLOduration=9.65291256 podStartE2EDuration="9.65291256s" podCreationTimestamp="2026-03-08 03:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:33:05.648474754 +0000 UTC m=+99.518367947" watchObservedRunningTime="2026-03-08 03:33:05.65291256 +0000 UTC m=+99.522805753" Mar 08 03:33:05.666815 master-0 kubenswrapper[33141]: I0308 03:33:05.666655 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b68cd84fb-w7r5z" podStartSLOduration=9.666633178 podStartE2EDuration="9.666633178s" podCreationTimestamp="2026-03-08 03:32:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:33:05.666130975 +0000 UTC m=+99.536024188" watchObservedRunningTime="2026-03-08 03:33:05.666633178 +0000 UTC m=+99.536526381" Mar 08 03:33:05.792382 master-0 kubenswrapper[33141]: I0308 
03:33:05.792282 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d5d8c978f-xfhx5" Mar 08 03:33:09.669899 master-0 kubenswrapper[33141]: I0308 03:33:09.669806 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr" event={"ID":"302e483a-6d6f-4a41-b4d7-3d11898277f4","Type":"ContainerStarted","Data":"211d42023954b1f4e4dc6f3b310e3f889031389e18407f36f966bbec4db9b094"} Mar 08 03:33:09.672146 master-0 kubenswrapper[33141]: I0308 03:33:09.672067 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-2cw9v" event={"ID":"456484f6-a19b-49f9-863b-f76e6f0c8c8f","Type":"ContainerStarted","Data":"261a45d98217e3070e53e3e69088773883a7fc624ba48c490af4cc3c58aabb9d"} Mar 08 03:33:09.672352 master-0 kubenswrapper[33141]: I0308 03:33:09.672298 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-6c7fb6b958-2cw9v" Mar 08 03:33:09.677842 master-0 kubenswrapper[33141]: I0308 03:33:09.675438 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-5ccd479c8c-v4t2c" event={"ID":"9c708dee-3f8e-4c03-82bd-d94fec91ac44","Type":"ContainerStarted","Data":"8330512be78b8e2b749b232732ede95127c93ee14720cea65bf96267a7feccf5"} Mar 08 03:33:09.677842 master-0 kubenswrapper[33141]: I0308 03:33:09.675694 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-5ccd479c8c-v4t2c" Mar 08 03:33:09.677842 master-0 kubenswrapper[33141]: I0308 03:33:09.676872 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cbd49d755-zp6r4" event={"ID":"f0240fbc-0596-49fe-afb1-24cb1a10470f","Type":"ContainerStarted","Data":"9f2b14c2b4713dcf171b45070358bdfb87857ec42039dda10ba3702984a47736"} Mar 
08 03:33:09.679553 master-0 kubenswrapper[33141]: I0308 03:33:09.679502 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-6c7fb6b958-2cw9v" Mar 08 03:33:09.683404 master-0 kubenswrapper[33141]: I0308 03:33:09.683350 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-5ccd479c8c-v4t2c" Mar 08 03:33:09.835927 master-0 kubenswrapper[33141]: I0308 03:33:09.835810 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-6c7fb6b958-2cw9v" podStartSLOduration=89.734837767 podStartE2EDuration="1m32.835795123s" podCreationTimestamp="2026-03-08 03:31:37 +0000 UTC" firstStartedPulling="2026-03-08 03:33:05.551359436 +0000 UTC m=+99.421252629" lastFinishedPulling="2026-03-08 03:33:08.652316792 +0000 UTC m=+102.522209985" observedRunningTime="2026-03-08 03:33:09.833892303 +0000 UTC m=+103.703785496" watchObservedRunningTime="2026-03-08 03:33:09.835795123 +0000 UTC m=+103.705688316" Mar 08 03:33:09.967141 master-0 kubenswrapper[33141]: I0308 03:33:09.966960 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-5ccd479c8c-v4t2c" podStartSLOduration=80.58652817 podStartE2EDuration="1m23.96694104s" podCreationTimestamp="2026-03-08 03:31:46 +0000 UTC" firstStartedPulling="2026-03-08 03:33:05.250713528 +0000 UTC m=+99.120606731" lastFinishedPulling="2026-03-08 03:33:08.631126408 +0000 UTC m=+102.501019601" observedRunningTime="2026-03-08 03:33:09.964734963 +0000 UTC m=+103.834628176" watchObservedRunningTime="2026-03-08 03:33:09.96694104 +0000 UTC m=+103.836834233" Mar 08 03:33:10.067478 master-0 kubenswrapper[33141]: I0308 03:33:10.067403 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-84f57b9877-mnlxs"] Mar 08 03:33:10.070873 master-0 kubenswrapper[33141]: I0308 03:33:10.070824 
33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-84f57b9877-mnlxs" Mar 08 03:33:10.072966 master-0 kubenswrapper[33141]: I0308 03:33:10.072654 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-sp7gt" Mar 08 03:33:10.073153 master-0 kubenswrapper[33141]: I0308 03:33:10.073089 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 08 03:33:10.073259 master-0 kubenswrapper[33141]: I0308 03:33:10.073235 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 08 03:33:10.144413 master-0 kubenswrapper[33141]: I0308 03:33:10.144346 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-84f57b9877-mnlxs"] Mar 08 03:33:10.163955 master-0 kubenswrapper[33141]: I0308 03:33:10.162607 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vbjm\" (UniqueName: \"kubernetes.io/projected/ffa263f5-3916-48bc-80f1-3f5aad28c9f9-kube-api-access-6vbjm\") pod \"downloads-84f57b9877-mnlxs\" (UID: \"ffa263f5-3916-48bc-80f1-3f5aad28c9f9\") " pod="openshift-console/downloads-84f57b9877-mnlxs" Mar 08 03:33:10.264745 master-0 kubenswrapper[33141]: I0308 03:33:10.263673 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vbjm\" (UniqueName: \"kubernetes.io/projected/ffa263f5-3916-48bc-80f1-3f5aad28c9f9-kube-api-access-6vbjm\") pod \"downloads-84f57b9877-mnlxs\" (UID: \"ffa263f5-3916-48bc-80f1-3f5aad28c9f9\") " pod="openshift-console/downloads-84f57b9877-mnlxs" Mar 08 03:33:10.309295 master-0 kubenswrapper[33141]: I0308 03:33:10.309206 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-5cbd49d755-zp6r4" 
podStartSLOduration=62.221878906 podStartE2EDuration="1m5.309188775s" podCreationTimestamp="2026-03-08 03:32:05 +0000 UTC" firstStartedPulling="2026-03-08 03:33:05.551395217 +0000 UTC m=+99.421288410" lastFinishedPulling="2026-03-08 03:33:08.638705086 +0000 UTC m=+102.508598279" observedRunningTime="2026-03-08 03:33:10.307496781 +0000 UTC m=+104.177389994" watchObservedRunningTime="2026-03-08 03:33:10.309188775 +0000 UTC m=+104.179081978" Mar 08 03:33:10.324955 master-0 kubenswrapper[33141]: I0308 03:33:10.324077 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vbjm\" (UniqueName: \"kubernetes.io/projected/ffa263f5-3916-48bc-80f1-3f5aad28c9f9-kube-api-access-6vbjm\") pod \"downloads-84f57b9877-mnlxs\" (UID: \"ffa263f5-3916-48bc-80f1-3f5aad28c9f9\") " pod="openshift-console/downloads-84f57b9877-mnlxs" Mar 08 03:33:10.391998 master-0 kubenswrapper[33141]: I0308 03:33:10.391933 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-84f57b9877-mnlxs" Mar 08 03:33:10.628996 master-0 kubenswrapper[33141]: I0308 03:33:10.628932 33141 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 08 03:33:10.629755 master-0 kubenswrapper[33141]: I0308 03:33:10.629719 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:33:10.630603 master-0 kubenswrapper[33141]: I0308 03:33:10.630569 33141 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 08 03:33:10.630855 master-0 kubenswrapper[33141]: I0308 03:33:10.630795 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver" containerID="cri-o://7e97224b271117a1493a120cc3fe337bdfe3bb4220cf72a64e99825547818942" gracePeriod=15 Mar 08 03:33:10.631051 master-0 kubenswrapper[33141]: I0308 03:33:10.630974 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints" containerID="cri-o://dbdcea86ab103c9cb9c14c9c2273f8b5a1a4edbdcf1befddd3bc41a62bab119e" gracePeriod=15 Mar 08 03:33:10.631051 master-0 kubenswrapper[33141]: I0308 03:33:10.631030 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://d43311a2bb6696a281415deafd9508871e6a5380ebdde27cb56f0a0571bc31fd" gracePeriod=15 Mar 08 03:33:10.631152 master-0 kubenswrapper[33141]: I0308 03:33:10.631060 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://4a787b02c2810c812e80cb502488def3ecbbbab6d8a206695ecdb4d86c0073a9" gracePeriod=15 Mar 08 03:33:10.631152 master-0 kubenswrapper[33141]: I0308 03:33:10.631091 33141 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-syncer" containerID="cri-o://cc4330769e3e177675679ac64fa42ad67f0209b63a892fa3de4c906aeb66af3e" gracePeriod=15 Mar 08 03:33:10.637345 master-0 kubenswrapper[33141]: I0308 03:33:10.633352 33141 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 08 03:33:10.637345 master-0 kubenswrapper[33141]: E0308 03:33:10.633495 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="setup" Mar 08 03:33:10.637345 master-0 kubenswrapper[33141]: I0308 03:33:10.633506 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="setup" Mar 08 03:33:10.637345 master-0 kubenswrapper[33141]: E0308 03:33:10.633520 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-syncer" Mar 08 03:33:10.637345 master-0 kubenswrapper[33141]: I0308 03:33:10.633526 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-syncer" Mar 08 03:33:10.637345 master-0 kubenswrapper[33141]: E0308 03:33:10.633534 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-insecure-readyz" Mar 08 03:33:10.637345 master-0 kubenswrapper[33141]: I0308 03:33:10.633540 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-insecure-readyz" Mar 08 03:33:10.637345 master-0 kubenswrapper[33141]: E0308 03:33:10.633554 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints" Mar 08 03:33:10.637345 
master-0 kubenswrapper[33141]: I0308 03:33:10.633560 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints" Mar 08 03:33:10.637345 master-0 kubenswrapper[33141]: E0308 03:33:10.633573 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints" Mar 08 03:33:10.637345 master-0 kubenswrapper[33141]: I0308 03:33:10.633579 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints" Mar 08 03:33:10.637345 master-0 kubenswrapper[33141]: E0308 03:33:10.633588 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-regeneration-controller" Mar 08 03:33:10.637345 master-0 kubenswrapper[33141]: I0308 03:33:10.633594 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-regeneration-controller" Mar 08 03:33:10.637345 master-0 kubenswrapper[33141]: E0308 03:33:10.633604 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver" Mar 08 03:33:10.637345 master-0 kubenswrapper[33141]: I0308 03:33:10.633609 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver" Mar 08 03:33:10.637345 master-0 kubenswrapper[33141]: I0308 03:33:10.633707 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints" Mar 08 03:33:10.637345 master-0 kubenswrapper[33141]: I0308 03:33:10.633714 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-regeneration-controller" 
Mar 08 03:33:10.637345 master-0 kubenswrapper[33141]: I0308 03:33:10.633725 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-insecure-readyz" Mar 08 03:33:10.637345 master-0 kubenswrapper[33141]: I0308 03:33:10.633732 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints" Mar 08 03:33:10.637345 master-0 kubenswrapper[33141]: I0308 03:33:10.633743 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-syncer" Mar 08 03:33:10.637345 master-0 kubenswrapper[33141]: I0308 03:33:10.633753 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver" Mar 08 03:33:10.772563 master-0 kubenswrapper[33141]: I0308 03:33:10.772513 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:33:10.773098 master-0 kubenswrapper[33141]: I0308 03:33:10.772886 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:33:10.773098 master-0 kubenswrapper[33141]: I0308 03:33:10.773064 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:33:10.773187 master-0 kubenswrapper[33141]: I0308 03:33:10.773161 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:33:10.773360 master-0 kubenswrapper[33141]: I0308 03:33:10.773325 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:33:10.773504 master-0 kubenswrapper[33141]: I0308 03:33:10.773464 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:33:10.773571 master-0 kubenswrapper[33141]: I0308 03:33:10.773539 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:33:10.773672 master-0 kubenswrapper[33141]: I0308 
03:33:10.773647 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:33:10.788093 master-0 kubenswrapper[33141]: I0308 03:33:10.787998 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 08 03:33:10.849194 master-0 kubenswrapper[33141]: I0308 03:33:10.846732 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-84f57b9877-mnlxs"] Mar 08 03:33:10.875014 master-0 kubenswrapper[33141]: I0308 03:33:10.874785 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:33:10.875014 master-0 kubenswrapper[33141]: I0308 03:33:10.874887 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:33:10.875014 master-0 kubenswrapper[33141]: I0308 03:33:10.874947 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:33:10.875014 master-0 kubenswrapper[33141]: I0308 03:33:10.874972 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:33:10.875014 master-0 kubenswrapper[33141]: I0308 03:33:10.875001 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:33:10.876159 master-0 kubenswrapper[33141]: I0308 03:33:10.875040 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 08 03:33:10.876159 master-0 kubenswrapper[33141]: I0308 03:33:10.875098 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:33:10.876159 master-0 kubenswrapper[33141]: I0308 03:33:10.875141 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: 
\"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:33:10.876159 master-0 kubenswrapper[33141]: I0308 03:33:10.875245 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:33:10.876159 master-0 kubenswrapper[33141]: I0308 03:33:10.875381 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:33:10.876159 master-0 kubenswrapper[33141]: I0308 03:33:10.875422 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:33:10.876159 master-0 kubenswrapper[33141]: I0308 03:33:10.875460 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:33:10.876159 master-0 kubenswrapper[33141]: I0308 03:33:10.875611 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:33:10.876159 master-0 kubenswrapper[33141]: I0308 03:33:10.875654 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:33:10.876159 master-0 kubenswrapper[33141]: I0308 03:33:10.875697 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:33:10.876159 master-0 kubenswrapper[33141]: I0308 03:33:10.875737 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:33:10.896687 master-0 kubenswrapper[33141]: W0308 03:33:10.896636 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffa263f5_3916_48bc_80f1_3f5aad28c9f9.slice/crio-eb4e58f7f74de6fce657434363bab385c1c1f8995acf6e45dd36359e026a485f WatchSource:0}: Error finding container eb4e58f7f74de6fce657434363bab385c1c1f8995acf6e45dd36359e026a485f: Status 404 returned error can't find the container with id eb4e58f7f74de6fce657434363bab385c1c1f8995acf6e45dd36359e026a485f
Mar 08 03:33:10.902407 master-0 kubenswrapper[33141]: E0308 03:33:10.902134 33141 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{telemeter-client-5cb97dd5fc-g7fqr.189ac041816dc9f8 openshift-monitoring 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-monitoring,Name:telemeter-client-5cb97dd5fc-g7fqr,UID:302e483a-6d6f-4a41-b4d7-3d11898277f4,APIVersion:v1,ResourceVersion:15640,FieldPath:spec.containers{reload},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f\" in 2.036s (2.036s including waiting). Image size: 437909442 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:33:10.900574712 +0000 UTC m=+104.770467915,LastTimestamp:2026-03-08 03:33:10.900574712 +0000 UTC m=+104.770467915,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 08 03:33:11.087472 master-0 kubenswrapper[33141]: I0308 03:33:11.087388 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:33:11.121520 master-0 kubenswrapper[33141]: W0308 03:33:11.121295 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda814bd60de133d95cf99630a978c017e.slice/crio-55b3da07a24825cb219b7c6ab60833b0ea376086957c339d86f70d3c314395f6 WatchSource:0}: Error finding container 55b3da07a24825cb219b7c6ab60833b0ea376086957c339d86f70d3c314395f6: Status 404 returned error can't find the container with id 55b3da07a24825cb219b7c6ab60833b0ea376086957c339d86f70d3c314395f6
Mar 08 03:33:11.693637 master-0 kubenswrapper[33141]: I0308 03:33:11.693443 33141 generic.go:334] "Generic (PLEG): container finished" podID="0f958554-d0e0-4a2d-84e8-17e20ae7625c" containerID="2b34b277e4a2839792fd0e13357068de7901d4501e66a718f17861b80f532b3f" exitCode=0
Mar 08 03:33:11.693637 master-0 kubenswrapper[33141]: I0308 03:33:11.693536 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"0f958554-d0e0-4a2d-84e8-17e20ae7625c","Type":"ContainerDied","Data":"2b34b277e4a2839792fd0e13357068de7901d4501e66a718f17861b80f532b3f"}
Mar 08 03:33:11.694980 master-0 kubenswrapper[33141]: I0308 03:33:11.694849 33141 status_manager.go:851] "Failed to get status for pod" podUID="077dd10388b9e3e48a07382126e86621" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:11.695648 master-0 kubenswrapper[33141]: I0308 03:33:11.695575 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"a814bd60de133d95cf99630a978c017e","Type":"ContainerStarted","Data":"192ccc130389c9b395ee876a542eccf82e039d4231c1b9164273ea629c05ea3a"}
Mar 08 03:33:11.695648 master-0 kubenswrapper[33141]: I0308 03:33:11.695630 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"a814bd60de133d95cf99630a978c017e","Type":"ContainerStarted","Data":"55b3da07a24825cb219b7c6ab60833b0ea376086957c339d86f70d3c314395f6"}
Mar 08 03:33:11.695998 master-0 kubenswrapper[33141]: I0308 03:33:11.695878 33141 status_manager.go:851] "Failed to get status for pod" podUID="0f958554-d0e0-4a2d-84e8-17e20ae7625c" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:11.697177 master-0 kubenswrapper[33141]: I0308 03:33:11.697102 33141 status_manager.go:851] "Failed to get status for pod" podUID="a814bd60de133d95cf99630a978c017e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:11.698378 master-0 kubenswrapper[33141]: I0308 03:33:11.698306 33141 status_manager.go:851] "Failed to get status for pod" podUID="077dd10388b9e3e48a07382126e86621" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:11.699277 master-0 kubenswrapper[33141]: I0308 03:33:11.699209 33141 status_manager.go:851] "Failed to get status for pod" podUID="0f958554-d0e0-4a2d-84e8-17e20ae7625c" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:11.700552 master-0 kubenswrapper[33141]: I0308 03:33:11.700463 33141 status_manager.go:851] "Failed to get status for pod" podUID="a814bd60de133d95cf99630a978c017e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:11.701071 master-0 kubenswrapper[33141]: I0308 03:33:11.700998 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr" event={"ID":"302e483a-6d6f-4a41-b4d7-3d11898277f4","Type":"ContainerStarted","Data":"90f47e8c52d2151307bc0e2d62b2c9d56705249cf352137b97e759fb5d5dc696"}
Mar 08 03:33:11.701071 master-0 kubenswrapper[33141]: I0308 03:33:11.701060 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr" event={"ID":"302e483a-6d6f-4a41-b4d7-3d11898277f4","Type":"ContainerStarted","Data":"fec03aa9ef4e3d586e894914ff5ec9a17a71bcbcbebde4d5880e39c86884a601"}
Mar 08 03:33:11.702347 master-0 kubenswrapper[33141]: I0308 03:33:11.702261 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-84f57b9877-mnlxs" event={"ID":"ffa263f5-3916-48bc-80f1-3f5aad28c9f9","Type":"ContainerStarted","Data":"eb4e58f7f74de6fce657434363bab385c1c1f8995acf6e45dd36359e026a485f"}
Mar 08 03:33:11.702614 master-0 kubenswrapper[33141]: I0308 03:33:11.702540 33141 status_manager.go:851] "Failed to get status for pod" podUID="077dd10388b9e3e48a07382126e86621" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:11.703948 master-0 kubenswrapper[33141]: I0308 03:33:11.703838 33141 status_manager.go:851] "Failed to get status for pod" podUID="0f958554-d0e0-4a2d-84e8-17e20ae7625c" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:11.705155 master-0 kubenswrapper[33141]: I0308 03:33:11.705065 33141 status_manager.go:851] "Failed to get status for pod" podUID="302e483a-6d6f-4a41-b4d7-3d11898277f4" pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/telemeter-client-5cb97dd5fc-g7fqr\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:11.705328 master-0 kubenswrapper[33141]: I0308 03:33:11.705283 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-check-endpoints/0.log"
Mar 08 03:33:11.706197 master-0 kubenswrapper[33141]: I0308 03:33:11.706123 33141 status_manager.go:851] "Failed to get status for pod" podUID="a814bd60de133d95cf99630a978c017e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:11.707230 master-0 kubenswrapper[33141]: I0308 03:33:11.707171 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-cert-syncer/1.log"
Mar 08 03:33:11.708236 master-0 kubenswrapper[33141]: I0308 03:33:11.708181 33141 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="dbdcea86ab103c9cb9c14c9c2273f8b5a1a4edbdcf1befddd3bc41a62bab119e" exitCode=0
Mar 08 03:33:11.708236 master-0 kubenswrapper[33141]: I0308 03:33:11.708220 33141 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="d43311a2bb6696a281415deafd9508871e6a5380ebdde27cb56f0a0571bc31fd" exitCode=0
Mar 08 03:33:11.708452 master-0 kubenswrapper[33141]: I0308 03:33:11.708241 33141 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="4a787b02c2810c812e80cb502488def3ecbbbab6d8a206695ecdb4d86c0073a9" exitCode=0
Mar 08 03:33:11.708452 master-0 kubenswrapper[33141]: I0308 03:33:11.708256 33141 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="cc4330769e3e177675679ac64fa42ad67f0209b63a892fa3de4c906aeb66af3e" exitCode=2
Mar 08 03:33:11.708452 master-0 kubenswrapper[33141]: I0308 03:33:11.708295 33141 scope.go:117] "RemoveContainer" containerID="29daacb2c26fcf18f9f3b673ab22e9e9aa0de4d9b19b229cdf38f36ca276b550"
Mar 08 03:33:12.717599 master-0 kubenswrapper[33141]: I0308 03:33:12.717538 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-cert-syncer/1.log"
Mar 08 03:33:13.203339 master-0 kubenswrapper[33141]: I0308 03:33:13.203264 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Mar 08 03:33:13.205061 master-0 kubenswrapper[33141]: I0308 03:33:13.204973 33141 status_manager.go:851] "Failed to get status for pod" podUID="0f958554-d0e0-4a2d-84e8-17e20ae7625c" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:13.206155 master-0 kubenswrapper[33141]: I0308 03:33:13.206095 33141 status_manager.go:851] "Failed to get status for pod" podUID="302e483a-6d6f-4a41-b4d7-3d11898277f4" pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/telemeter-client-5cb97dd5fc-g7fqr\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:13.206847 master-0 kubenswrapper[33141]: I0308 03:33:13.206800 33141 status_manager.go:851] "Failed to get status for pod" podUID="a814bd60de133d95cf99630a978c017e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:13.210935 master-0 kubenswrapper[33141]: I0308 03:33:13.210882 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-cert-syncer/1.log"
Mar 08 03:33:13.212361 master-0 kubenswrapper[33141]: I0308 03:33:13.212331 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:33:13.213432 master-0 kubenswrapper[33141]: I0308 03:33:13.213349 33141 status_manager.go:851] "Failed to get status for pod" podUID="077dd10388b9e3e48a07382126e86621" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:13.214156 master-0 kubenswrapper[33141]: I0308 03:33:13.214108 33141 status_manager.go:851] "Failed to get status for pod" podUID="0f958554-d0e0-4a2d-84e8-17e20ae7625c" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:13.214787 master-0 kubenswrapper[33141]: I0308 03:33:13.214738 33141 status_manager.go:851] "Failed to get status for pod" podUID="302e483a-6d6f-4a41-b4d7-3d11898277f4" pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/telemeter-client-5cb97dd5fc-g7fqr\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:13.215470 master-0 kubenswrapper[33141]: I0308 03:33:13.215420 33141 status_manager.go:851] "Failed to get status for pod" podUID="a814bd60de133d95cf99630a978c017e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:13.377891 master-0 kubenswrapper[33141]: I0308 03:33:13.377818 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f958554-d0e0-4a2d-84e8-17e20ae7625c-kube-api-access\") pod \"0f958554-d0e0-4a2d-84e8-17e20ae7625c\" (UID: \"0f958554-d0e0-4a2d-84e8-17e20ae7625c\") "
Mar 08 03:33:13.377891 master-0 kubenswrapper[33141]: I0308 03:33:13.377879 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"077dd10388b9e3e48a07382126e86621\" (UID: \"077dd10388b9e3e48a07382126e86621\") "
Mar 08 03:33:13.378327 master-0 kubenswrapper[33141]: I0308 03:33:13.377946 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0f958554-d0e0-4a2d-84e8-17e20ae7625c-var-lock\") pod \"0f958554-d0e0-4a2d-84e8-17e20ae7625c\" (UID: \"0f958554-d0e0-4a2d-84e8-17e20ae7625c\") "
Mar 08 03:33:13.378327 master-0 kubenswrapper[33141]: I0308 03:33:13.377972 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f958554-d0e0-4a2d-84e8-17e20ae7625c-kubelet-dir\") pod \"0f958554-d0e0-4a2d-84e8-17e20ae7625c\" (UID: \"0f958554-d0e0-4a2d-84e8-17e20ae7625c\") "
Mar 08 03:33:13.378327 master-0 kubenswrapper[33141]: I0308 03:33:13.378040 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"077dd10388b9e3e48a07382126e86621\" (UID: \"077dd10388b9e3e48a07382126e86621\") "
Mar 08 03:33:13.378327 master-0 kubenswrapper[33141]: I0308 03:33:13.378094 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"077dd10388b9e3e48a07382126e86621\" (UID: \"077dd10388b9e3e48a07382126e86621\") "
Mar 08 03:33:13.378327 master-0 kubenswrapper[33141]: I0308 03:33:13.378195 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f958554-d0e0-4a2d-84e8-17e20ae7625c-var-lock" (OuterVolumeSpecName: "var-lock") pod "0f958554-d0e0-4a2d-84e8-17e20ae7625c" (UID: "0f958554-d0e0-4a2d-84e8-17e20ae7625c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:33:13.378327 master-0 kubenswrapper[33141]: I0308 03:33:13.378167 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "077dd10388b9e3e48a07382126e86621" (UID: "077dd10388b9e3e48a07382126e86621"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:33:13.378327 master-0 kubenswrapper[33141]: I0308 03:33:13.378285 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "077dd10388b9e3e48a07382126e86621" (UID: "077dd10388b9e3e48a07382126e86621"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:33:13.378327 master-0 kubenswrapper[33141]: I0308 03:33:13.378324 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "077dd10388b9e3e48a07382126e86621" (UID: "077dd10388b9e3e48a07382126e86621"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:33:13.378810 master-0 kubenswrapper[33141]: I0308 03:33:13.378356 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f958554-d0e0-4a2d-84e8-17e20ae7625c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0f958554-d0e0-4a2d-84e8-17e20ae7625c" (UID: "0f958554-d0e0-4a2d-84e8-17e20ae7625c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:33:13.378888 master-0 kubenswrapper[33141]: I0308 03:33:13.378844 33141 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:33:13.378888 master-0 kubenswrapper[33141]: I0308 03:33:13.378876 33141 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:33:13.379052 master-0 kubenswrapper[33141]: I0308 03:33:13.378939 33141 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:33:13.379052 master-0 kubenswrapper[33141]: I0308 03:33:13.378973 33141 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0f958554-d0e0-4a2d-84e8-17e20ae7625c-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 08 03:33:13.379052 master-0 kubenswrapper[33141]: I0308 03:33:13.378990 33141 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0f958554-d0e0-4a2d-84e8-17e20ae7625c-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:33:13.382091 master-0 kubenswrapper[33141]: I0308 03:33:13.382036 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f958554-d0e0-4a2d-84e8-17e20ae7625c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0f958554-d0e0-4a2d-84e8-17e20ae7625c" (UID: "0f958554-d0e0-4a2d-84e8-17e20ae7625c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:33:13.480880 master-0 kubenswrapper[33141]: I0308 03:33:13.480773 33141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f958554-d0e0-4a2d-84e8-17e20ae7625c-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 08 03:33:13.727889 master-0 kubenswrapper[33141]: I0308 03:33:13.727703 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"0f958554-d0e0-4a2d-84e8-17e20ae7625c","Type":"ContainerDied","Data":"96ae8ea1742c004dc67f72a928be3799103a0e75de703bed9bb0e13766811751"}
Mar 08 03:33:13.727889 master-0 kubenswrapper[33141]: I0308 03:33:13.727758 33141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96ae8ea1742c004dc67f72a928be3799103a0e75de703bed9bb0e13766811751"
Mar 08 03:33:13.727889 master-0 kubenswrapper[33141]: I0308 03:33:13.727729 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Mar 08 03:33:13.730994 master-0 kubenswrapper[33141]: I0308 03:33:13.730951 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-cert-syncer/1.log"
Mar 08 03:33:13.731808 master-0 kubenswrapper[33141]: I0308 03:33:13.731759 33141 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="7e97224b271117a1493a120cc3fe337bdfe3bb4220cf72a64e99825547818942" exitCode=0
Mar 08 03:33:13.731898 master-0 kubenswrapper[33141]: I0308 03:33:13.731835 33141 scope.go:117] "RemoveContainer" containerID="dbdcea86ab103c9cb9c14c9c2273f8b5a1a4edbdcf1befddd3bc41a62bab119e"
Mar 08 03:33:13.731898 master-0 kubenswrapper[33141]: I0308 03:33:13.731840 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:33:13.745920 master-0 kubenswrapper[33141]: I0308 03:33:13.745847 33141 status_manager.go:851] "Failed to get status for pod" podUID="077dd10388b9e3e48a07382126e86621" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:13.746604 master-0 kubenswrapper[33141]: I0308 03:33:13.746526 33141 status_manager.go:851] "Failed to get status for pod" podUID="0f958554-d0e0-4a2d-84e8-17e20ae7625c" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:13.747436 master-0 kubenswrapper[33141]: I0308 03:33:13.747308 33141 status_manager.go:851] "Failed to get status for pod" podUID="302e483a-6d6f-4a41-b4d7-3d11898277f4" pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/telemeter-client-5cb97dd5fc-g7fqr\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:13.748316 master-0 kubenswrapper[33141]: I0308 03:33:13.748263 33141 status_manager.go:851] "Failed to get status for pod" podUID="a814bd60de133d95cf99630a978c017e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:13.755639 master-0 kubenswrapper[33141]: I0308 03:33:13.755597 33141 scope.go:117] "RemoveContainer" containerID="d43311a2bb6696a281415deafd9508871e6a5380ebdde27cb56f0a0571bc31fd"
Mar 08 03:33:13.758407 master-0 kubenswrapper[33141]: I0308 03:33:13.758332 33141 status_manager.go:851] "Failed to get status for pod" podUID="077dd10388b9e3e48a07382126e86621" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:13.759216 master-0 kubenswrapper[33141]: I0308 03:33:13.759165 33141 status_manager.go:851] "Failed to get status for pod" podUID="0f958554-d0e0-4a2d-84e8-17e20ae7625c" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:13.760310 master-0 kubenswrapper[33141]: I0308 03:33:13.760031 33141 status_manager.go:851] "Failed to get status for pod" podUID="302e483a-6d6f-4a41-b4d7-3d11898277f4" pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/telemeter-client-5cb97dd5fc-g7fqr\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:13.763622 master-0 kubenswrapper[33141]: I0308 03:33:13.763563 33141 status_manager.go:851] "Failed to get status for pod" podUID="a814bd60de133d95cf99630a978c017e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:13.777035 master-0 kubenswrapper[33141]: I0308 03:33:13.776992 33141 scope.go:117] "RemoveContainer" containerID="4a787b02c2810c812e80cb502488def3ecbbbab6d8a206695ecdb4d86c0073a9"
Mar 08 03:33:13.797617 master-0 kubenswrapper[33141]: I0308 03:33:13.797579 33141 scope.go:117] "RemoveContainer" containerID="cc4330769e3e177675679ac64fa42ad67f0209b63a892fa3de4c906aeb66af3e"
Mar 08 03:33:13.814421 master-0 kubenswrapper[33141]: I0308 03:33:13.814300 33141 scope.go:117] "RemoveContainer" containerID="7e97224b271117a1493a120cc3fe337bdfe3bb4220cf72a64e99825547818942"
Mar 08 03:33:13.829366 master-0 kubenswrapper[33141]: I0308 03:33:13.829310 33141 scope.go:117] "RemoveContainer" containerID="b304608a015d2315aefa117ee7abc73ca3562cc6ec205cc71c44d00892a24867"
Mar 08 03:33:13.846528 master-0 kubenswrapper[33141]: I0308 03:33:13.846480 33141 scope.go:117] "RemoveContainer" containerID="dbdcea86ab103c9cb9c14c9c2273f8b5a1a4edbdcf1befddd3bc41a62bab119e"
Mar 08 03:33:13.847009 master-0 kubenswrapper[33141]: E0308 03:33:13.846947 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbdcea86ab103c9cb9c14c9c2273f8b5a1a4edbdcf1befddd3bc41a62bab119e\": container with ID starting with dbdcea86ab103c9cb9c14c9c2273f8b5a1a4edbdcf1befddd3bc41a62bab119e not found: ID does not exist" containerID="dbdcea86ab103c9cb9c14c9c2273f8b5a1a4edbdcf1befddd3bc41a62bab119e"
Mar 08 03:33:13.847093 master-0 kubenswrapper[33141]: I0308 03:33:13.847025 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbdcea86ab103c9cb9c14c9c2273f8b5a1a4edbdcf1befddd3bc41a62bab119e"} err="failed to get container status \"dbdcea86ab103c9cb9c14c9c2273f8b5a1a4edbdcf1befddd3bc41a62bab119e\": rpc error: code = NotFound desc = could not find container \"dbdcea86ab103c9cb9c14c9c2273f8b5a1a4edbdcf1befddd3bc41a62bab119e\": container with ID starting with dbdcea86ab103c9cb9c14c9c2273f8b5a1a4edbdcf1befddd3bc41a62bab119e not found: ID does not exist"
Mar 08 03:33:13.847093 master-0 kubenswrapper[33141]: I0308 03:33:13.847074 33141 scope.go:117] "RemoveContainer" containerID="d43311a2bb6696a281415deafd9508871e6a5380ebdde27cb56f0a0571bc31fd"
Mar 08 03:33:13.847588 master-0 kubenswrapper[33141]: E0308 03:33:13.847512 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d43311a2bb6696a281415deafd9508871e6a5380ebdde27cb56f0a0571bc31fd\": container with ID starting with d43311a2bb6696a281415deafd9508871e6a5380ebdde27cb56f0a0571bc31fd not found: ID does not exist" containerID="d43311a2bb6696a281415deafd9508871e6a5380ebdde27cb56f0a0571bc31fd"
Mar 08 03:33:13.847670 master-0 kubenswrapper[33141]: I0308 03:33:13.847595 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d43311a2bb6696a281415deafd9508871e6a5380ebdde27cb56f0a0571bc31fd"} err="failed to get container status \"d43311a2bb6696a281415deafd9508871e6a5380ebdde27cb56f0a0571bc31fd\": rpc error: code = NotFound desc = could not find container \"d43311a2bb6696a281415deafd9508871e6a5380ebdde27cb56f0a0571bc31fd\": container with ID starting with d43311a2bb6696a281415deafd9508871e6a5380ebdde27cb56f0a0571bc31fd not found: ID does not exist"
Mar 08 03:33:13.847670 master-0 kubenswrapper[33141]: I0308 03:33:13.847639 33141 scope.go:117] "RemoveContainer" containerID="4a787b02c2810c812e80cb502488def3ecbbbab6d8a206695ecdb4d86c0073a9"
Mar 08 03:33:13.848098 master-0 kubenswrapper[33141]: E0308 03:33:13.848037 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a787b02c2810c812e80cb502488def3ecbbbab6d8a206695ecdb4d86c0073a9\": container with ID starting with 4a787b02c2810c812e80cb502488def3ecbbbab6d8a206695ecdb4d86c0073a9 not found: ID does not exist" containerID="4a787b02c2810c812e80cb502488def3ecbbbab6d8a206695ecdb4d86c0073a9"
Mar 08 03:33:13.848147 master-0 kubenswrapper[33141]: I0308 03:33:13.848105 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a787b02c2810c812e80cb502488def3ecbbbab6d8a206695ecdb4d86c0073a9"} err="failed to get container status \"4a787b02c2810c812e80cb502488def3ecbbbab6d8a206695ecdb4d86c0073a9\": rpc error: code = NotFound desc = could not find container \"4a787b02c2810c812e80cb502488def3ecbbbab6d8a206695ecdb4d86c0073a9\": container with ID starting with 4a787b02c2810c812e80cb502488def3ecbbbab6d8a206695ecdb4d86c0073a9 not found: ID does not exist"
Mar 08 03:33:13.848202 master-0 kubenswrapper[33141]: I0308 03:33:13.848158 33141 scope.go:117] "RemoveContainer" containerID="cc4330769e3e177675679ac64fa42ad67f0209b63a892fa3de4c906aeb66af3e"
Mar 08 03:33:13.848578 master-0 kubenswrapper[33141]: E0308 03:33:13.848543 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc4330769e3e177675679ac64fa42ad67f0209b63a892fa3de4c906aeb66af3e\": container with ID starting with cc4330769e3e177675679ac64fa42ad67f0209b63a892fa3de4c906aeb66af3e not found: ID does not exist" containerID="cc4330769e3e177675679ac64fa42ad67f0209b63a892fa3de4c906aeb66af3e"
Mar 08 03:33:13.848654 master-0 kubenswrapper[33141]: I0308 03:33:13.848625 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc4330769e3e177675679ac64fa42ad67f0209b63a892fa3de4c906aeb66af3e"} err="failed to get container status \"cc4330769e3e177675679ac64fa42ad67f0209b63a892fa3de4c906aeb66af3e\": rpc error: code = NotFound desc = could not find container \"cc4330769e3e177675679ac64fa42ad67f0209b63a892fa3de4c906aeb66af3e\": container with ID starting with cc4330769e3e177675679ac64fa42ad67f0209b63a892fa3de4c906aeb66af3e not found: ID does not exist"
Mar 08 03:33:13.848702 master-0 kubenswrapper[33141]: I0308 03:33:13.848653 33141 scope.go:117] "RemoveContainer" containerID="7e97224b271117a1493a120cc3fe337bdfe3bb4220cf72a64e99825547818942"
Mar 08 03:33:13.851577 master-0 kubenswrapper[33141]: E0308 03:33:13.851537 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e97224b271117a1493a120cc3fe337bdfe3bb4220cf72a64e99825547818942\": container with ID starting with 7e97224b271117a1493a120cc3fe337bdfe3bb4220cf72a64e99825547818942 not found: ID does not exist" containerID="7e97224b271117a1493a120cc3fe337bdfe3bb4220cf72a64e99825547818942"
Mar 08 03:33:13.851662 master-0 kubenswrapper[33141]: I0308 03:33:13.851575 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e97224b271117a1493a120cc3fe337bdfe3bb4220cf72a64e99825547818942"} err="failed to get container status \"7e97224b271117a1493a120cc3fe337bdfe3bb4220cf72a64e99825547818942\": rpc error: code = NotFound desc = could not find container \"7e97224b271117a1493a120cc3fe337bdfe3bb4220cf72a64e99825547818942\": container with ID starting with 7e97224b271117a1493a120cc3fe337bdfe3bb4220cf72a64e99825547818942 not found: ID does not exist"
Mar 08 03:33:13.851662 master-0 kubenswrapper[33141]: I0308 03:33:13.851601 33141 scope.go:117] "RemoveContainer" containerID="b304608a015d2315aefa117ee7abc73ca3562cc6ec205cc71c44d00892a24867"
Mar 08 03:33:13.852152 master-0 kubenswrapper[33141]: E0308 03:33:13.852105 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b304608a015d2315aefa117ee7abc73ca3562cc6ec205cc71c44d00892a24867\": container with ID starting with b304608a015d2315aefa117ee7abc73ca3562cc6ec205cc71c44d00892a24867 not found: ID does not exist" containerID="b304608a015d2315aefa117ee7abc73ca3562cc6ec205cc71c44d00892a24867"
Mar 08 03:33:13.852225 master-0 kubenswrapper[33141]: I0308 03:33:13.852154 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b304608a015d2315aefa117ee7abc73ca3562cc6ec205cc71c44d00892a24867"} err="failed to get container status \"b304608a015d2315aefa117ee7abc73ca3562cc6ec205cc71c44d00892a24867\": rpc error: code = NotFound desc = could not find container \"b304608a015d2315aefa117ee7abc73ca3562cc6ec205cc71c44d00892a24867\": container with ID starting with b304608a015d2315aefa117ee7abc73ca3562cc6ec205cc71c44d00892a24867 not found: ID does not exist"
Mar 08 03:33:14.361181 master-0 kubenswrapper[33141]: I0308 03:33:14.361121 33141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="077dd10388b9e3e48a07382126e86621" path="/var/lib/kubelet/pods/077dd10388b9e3e48a07382126e86621/volumes"
Mar 08 03:33:16.364617 master-0 kubenswrapper[33141]: E0308 03:33:16.364562 33141 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:16.365314 master-0 kubenswrapper[33141]: I0308 03:33:16.364552 33141 status_manager.go:851] "Failed to get status for pod" podUID="ffa263f5-3916-48bc-80f1-3f5aad28c9f9" pod="openshift-console/downloads-84f57b9877-mnlxs" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-mnlxs\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:16.366709 master-0 kubenswrapper[33141]: E0308 03:33:16.365515 33141 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:16.366709 master-0 kubenswrapper[33141]: I0308 03:33:16.366444 33141 status_manager.go:851] "Failed to get status for pod" podUID="0f958554-d0e0-4a2d-84e8-17e20ae7625c" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:16.366709 master-0 kubenswrapper[33141]: E0308 03:33:16.366440 33141 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:16.367328 master-0 kubenswrapper[33141]: E0308 03:33:16.367290 33141 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:16.367818 master-0 kubenswrapper[33141]: E0308 03:33:16.367786 33141 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:33:16.367818
master-0 kubenswrapper[33141]: I0308 03:33:16.367816 33141 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 08 03:33:16.368484 master-0 kubenswrapper[33141]: E0308 03:33:16.368455 33141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 08 03:33:16.368631 master-0 kubenswrapper[33141]: I0308 03:33:16.368591 33141 status_manager.go:851] "Failed to get status for pod" podUID="302e483a-6d6f-4a41-b4d7-3d11898277f4" pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/telemeter-client-5cb97dd5fc-g7fqr\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:33:16.369454 master-0 kubenswrapper[33141]: I0308 03:33:16.369212 33141 status_manager.go:851] "Failed to get status for pod" podUID="a814bd60de133d95cf99630a978c017e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:33:16.570809 master-0 kubenswrapper[33141]: E0308 03:33:16.570547 33141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 08 03:33:16.971567 master-0 kubenswrapper[33141]: E0308 03:33:16.971414 33141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 08 03:33:17.772925 master-0 kubenswrapper[33141]: E0308 03:33:17.772830 33141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 08 03:33:18.214810 master-0 kubenswrapper[33141]: E0308 03:33:18.214457 33141 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{telemeter-client-5cb97dd5fc-g7fqr.189ac041816dc9f8 openshift-monitoring 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-monitoring,Name:telemeter-client-5cb97dd5fc-g7fqr,UID:302e483a-6d6f-4a41-b4d7-3d11898277f4,APIVersion:v1,ResourceVersion:15640,FieldPath:spec.containers{reload},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4b4b9d1eb0e1c0c6ac080f177364ad36f99279aa89ae66c06f5b6d035f121f\" in 2.036s (2.036s including waiting). 
Image size: 437909442 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:33:10.900574712 +0000 UTC m=+104.770467915,LastTimestamp:2026-03-08 03:33:10.900574712 +0000 UTC m=+104.770467915,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:33:19.374811 master-0 kubenswrapper[33141]: E0308 03:33:19.374739 33141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 08 03:33:22.576497 master-0 kubenswrapper[33141]: E0308 03:33:22.576426 33141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 08 03:33:23.350005 master-0 kubenswrapper[33141]: I0308 03:33:23.349948 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:33:23.351821 master-0 kubenswrapper[33141]: I0308 03:33:23.351752 33141 status_manager.go:851] "Failed to get status for pod" podUID="ffa263f5-3916-48bc-80f1-3f5aad28c9f9" pod="openshift-console/downloads-84f57b9877-mnlxs" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-mnlxs\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:33:23.352482 master-0 kubenswrapper[33141]: I0308 03:33:23.352439 33141 status_manager.go:851] "Failed to get status for pod" podUID="0f958554-d0e0-4a2d-84e8-17e20ae7625c" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:33:23.352926 master-0 kubenswrapper[33141]: I0308 03:33:23.352867 33141 status_manager.go:851] "Failed to get status for pod" podUID="302e483a-6d6f-4a41-b4d7-3d11898277f4" pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/telemeter-client-5cb97dd5fc-g7fqr\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:33:23.353377 master-0 kubenswrapper[33141]: I0308 03:33:23.353335 33141 status_manager.go:851] "Failed to get status for pod" podUID="a814bd60de133d95cf99630a978c017e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:33:23.371756 master-0 kubenswrapper[33141]: I0308 03:33:23.371728 33141 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
podUID="23a8ec34-6b47-46e5-b5b2-35e20153dca7" Mar 08 03:33:23.371756 master-0 kubenswrapper[33141]: I0308 03:33:23.371753 33141 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="23a8ec34-6b47-46e5-b5b2-35e20153dca7" Mar 08 03:33:23.372408 master-0 kubenswrapper[33141]: E0308 03:33:23.372364 33141 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:33:23.373043 master-0 kubenswrapper[33141]: I0308 03:33:23.373022 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:33:23.399288 master-0 kubenswrapper[33141]: W0308 03:33:23.399245 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36d4251d3504cdc0ec85144c1379056c.slice/crio-17ed079539f27e4b914ad2c94987494a1276984ac3a4436a62af425f56a80844 WatchSource:0}: Error finding container 17ed079539f27e4b914ad2c94987494a1276984ac3a4436a62af425f56a80844: Status 404 returned error can't find the container with id 17ed079539f27e4b914ad2c94987494a1276984ac3a4436a62af425f56a80844 Mar 08 03:33:23.846454 master-0 kubenswrapper[33141]: I0308 03:33:23.846367 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_d80fb58c61b036bc2179d84399404132/kube-controller-manager/0.log" Mar 08 03:33:23.847251 master-0 kubenswrapper[33141]: I0308 03:33:23.846462 33141 generic.go:334] "Generic (PLEG): container finished" podID="d80fb58c61b036bc2179d84399404132" containerID="efbf585c23fc1e979a8521b267e8220f735c3268158b1f137e28d2cce1acecfb" exitCode=1 Mar 08 03:33:23.847251 master-0 kubenswrapper[33141]: 
I0308 03:33:23.846595 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"d80fb58c61b036bc2179d84399404132","Type":"ContainerDied","Data":"efbf585c23fc1e979a8521b267e8220f735c3268158b1f137e28d2cce1acecfb"} Mar 08 03:33:23.847392 master-0 kubenswrapper[33141]: I0308 03:33:23.847268 33141 scope.go:117] "RemoveContainer" containerID="efbf585c23fc1e979a8521b267e8220f735c3268158b1f137e28d2cce1acecfb" Mar 08 03:33:23.848825 master-0 kubenswrapper[33141]: I0308 03:33:23.848752 33141 status_manager.go:851] "Failed to get status for pod" podUID="d80fb58c61b036bc2179d84399404132" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:33:23.849545 master-0 kubenswrapper[33141]: I0308 03:33:23.849494 33141 generic.go:334] "Generic (PLEG): container finished" podID="36d4251d3504cdc0ec85144c1379056c" containerID="4ede5a1be502bf8139c1b63d19c775b7dea1844203156a873af283f6f8c0d456" exitCode=0 Mar 08 03:33:23.849545 master-0 kubenswrapper[33141]: I0308 03:33:23.849543 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerDied","Data":"4ede5a1be502bf8139c1b63d19c775b7dea1844203156a873af283f6f8c0d456"} Mar 08 03:33:23.849719 master-0 kubenswrapper[33141]: I0308 03:33:23.849572 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"17ed079539f27e4b914ad2c94987494a1276984ac3a4436a62af425f56a80844"} Mar 08 03:33:23.850001 master-0 kubenswrapper[33141]: I0308 03:33:23.849876 33141 status_manager.go:851] "Failed to get 
status for pod" podUID="ffa263f5-3916-48bc-80f1-3f5aad28c9f9" pod="openshift-console/downloads-84f57b9877-mnlxs" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-mnlxs\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:33:23.850106 master-0 kubenswrapper[33141]: I0308 03:33:23.849954 33141 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="23a8ec34-6b47-46e5-b5b2-35e20153dca7" Mar 08 03:33:23.850175 master-0 kubenswrapper[33141]: I0308 03:33:23.850115 33141 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="23a8ec34-6b47-46e5-b5b2-35e20153dca7" Mar 08 03:33:23.851053 master-0 kubenswrapper[33141]: I0308 03:33:23.850941 33141 status_manager.go:851] "Failed to get status for pod" podUID="0f958554-d0e0-4a2d-84e8-17e20ae7625c" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:33:23.851144 master-0 kubenswrapper[33141]: E0308 03:33:23.850959 33141 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:33:23.853024 master-0 kubenswrapper[33141]: I0308 03:33:23.852953 33141 status_manager.go:851] "Failed to get status for pod" podUID="302e483a-6d6f-4a41-b4d7-3d11898277f4" pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/telemeter-client-5cb97dd5fc-g7fqr\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:33:23.854049 master-0 
kubenswrapper[33141]: I0308 03:33:23.853985 33141 status_manager.go:851] "Failed to get status for pod" podUID="a814bd60de133d95cf99630a978c017e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:33:23.855773 master-0 kubenswrapper[33141]: I0308 03:33:23.855707 33141 status_manager.go:851] "Failed to get status for pod" podUID="d80fb58c61b036bc2179d84399404132" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:33:23.856768 master-0 kubenswrapper[33141]: I0308 03:33:23.856718 33141 status_manager.go:851] "Failed to get status for pod" podUID="ffa263f5-3916-48bc-80f1-3f5aad28c9f9" pod="openshift-console/downloads-84f57b9877-mnlxs" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-mnlxs\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:33:23.857633 master-0 kubenswrapper[33141]: I0308 03:33:23.857583 33141 status_manager.go:851] "Failed to get status for pod" podUID="0f958554-d0e0-4a2d-84e8-17e20ae7625c" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:33:23.859563 master-0 kubenswrapper[33141]: I0308 03:33:23.859463 33141 status_manager.go:851] "Failed to get status for pod" podUID="302e483a-6d6f-4a41-b4d7-3d11898277f4" pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/telemeter-client-5cb97dd5fc-g7fqr\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:33:23.860493 master-0 kubenswrapper[33141]: I0308 03:33:23.860450 33141 status_manager.go:851] "Failed to get status for pod" podUID="a814bd60de133d95cf99630a978c017e" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:33:24.871621 master-0 kubenswrapper[33141]: I0308 03:33:24.871576 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_d80fb58c61b036bc2179d84399404132/kube-controller-manager/0.log" Mar 08 03:33:24.871977 master-0 kubenswrapper[33141]: I0308 03:33:24.871669 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"d80fb58c61b036bc2179d84399404132","Type":"ContainerStarted","Data":"7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9"} Mar 08 03:33:24.880984 master-0 kubenswrapper[33141]: I0308 03:33:24.880072 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"ac18e9b8046d5f0859d7f45040b24645c77505a3b2be135a6657603468a587a2"} Mar 08 03:33:24.880984 master-0 kubenswrapper[33141]: I0308 03:33:24.880122 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"3df8b7dbd5d3786c6e8f4a937c38b68d847caf7fe85004c995f3d4e1019eabad"} Mar 08 03:33:24.880984 master-0 kubenswrapper[33141]: I0308 03:33:24.880137 
33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"f649f54e219e06b8748da4da2fa27f79cb801d6e2f92b8bb2c0f27802b663a97"} Mar 08 03:33:25.889431 master-0 kubenswrapper[33141]: I0308 03:33:25.889379 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"baeb40cdfb5c81711bbc2f3db26c5928f82fe6a4944dec127f7b3d45c111c0f9"} Mar 08 03:33:25.889431 master-0 kubenswrapper[33141]: I0308 03:33:25.889425 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"e13c6667dcb854c1de369f759c38c0b4f9f63e9afd8fb3ae46c3c1e8f1856795"} Mar 08 03:33:25.890052 master-0 kubenswrapper[33141]: I0308 03:33:25.889641 33141 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="23a8ec34-6b47-46e5-b5b2-35e20153dca7" Mar 08 03:33:25.890052 master-0 kubenswrapper[33141]: I0308 03:33:25.889658 33141 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="23a8ec34-6b47-46e5-b5b2-35e20153dca7" Mar 08 03:33:25.890052 master-0 kubenswrapper[33141]: I0308 03:33:25.889848 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:33:27.430122 master-0 kubenswrapper[33141]: I0308 03:33:27.430026 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:33:27.430122 master-0 kubenswrapper[33141]: I0308 03:33:27.430085 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:33:27.434815 master-0 kubenswrapper[33141]: I0308 03:33:27.434754 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:33:28.373862 master-0 kubenswrapper[33141]: I0308 03:33:28.373776 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:33:28.373862 master-0 kubenswrapper[33141]: I0308 03:33:28.373861 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:33:28.384213 master-0 kubenswrapper[33141]: I0308 03:33:28.383971 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:33:30.905232 master-0 kubenswrapper[33141]: I0308 03:33:30.905177 33141 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:33:30.929875 master-0 kubenswrapper[33141]: I0308 03:33:30.929794 33141 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="23a8ec34-6b47-46e5-b5b2-35e20153dca7" Mar 08 03:33:30.929875 master-0 kubenswrapper[33141]: I0308 03:33:30.929871 33141 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="23a8ec34-6b47-46e5-b5b2-35e20153dca7" Mar 08 03:33:30.939455 master-0 kubenswrapper[33141]: I0308 03:33:30.939416 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:33:30.941524 master-0 kubenswrapper[33141]: I0308 03:33:30.941491 33141 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
oldPodUID="36d4251d3504cdc0ec85144c1379056c" podUID="6e1b6ca8-8d9e-4bc5-9c19-35fc5367f1b7" Mar 08 03:33:31.940337 master-0 kubenswrapper[33141]: I0308 03:33:31.940200 33141 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="23a8ec34-6b47-46e5-b5b2-35e20153dca7" Mar 08 03:33:31.940337 master-0 kubenswrapper[33141]: I0308 03:33:31.940267 33141 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="23a8ec34-6b47-46e5-b5b2-35e20153dca7" Mar 08 03:33:36.379965 master-0 kubenswrapper[33141]: I0308 03:33:36.379882 33141 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="36d4251d3504cdc0ec85144c1379056c" podUID="6e1b6ca8-8d9e-4bc5-9c19-35fc5367f1b7" Mar 08 03:33:37.435694 master-0 kubenswrapper[33141]: I0308 03:33:37.435601 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:33:40.419038 master-0 kubenswrapper[33141]: I0308 03:33:40.418972 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 08 03:33:41.174616 master-0 kubenswrapper[33141]: I0308 03:33:41.174553 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-d4zhc" Mar 08 03:33:41.215553 master-0 kubenswrapper[33141]: I0308 03:33:41.215500 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 08 03:33:42.111997 master-0 kubenswrapper[33141]: I0308 03:33:42.111938 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-rgflg" Mar 08 03:33:42.121078 master-0 kubenswrapper[33141]: I0308 03:33:42.120927 33141 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns"/"dns-default" Mar 08 03:33:42.494037 master-0 kubenswrapper[33141]: I0308 03:33:42.493934 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 08 03:33:42.506216 master-0 kubenswrapper[33141]: I0308 03:33:42.506182 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 08 03:33:42.646673 master-0 kubenswrapper[33141]: I0308 03:33:42.646612 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-278m6" Mar 08 03:33:42.861410 master-0 kubenswrapper[33141]: I0308 03:33:42.861346 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 08 03:33:43.160207 master-0 kubenswrapper[33141]: I0308 03:33:43.160079 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-t6pd7" Mar 08 03:33:43.175595 master-0 kubenswrapper[33141]: I0308 03:33:43.175549 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 08 03:33:43.321708 master-0 kubenswrapper[33141]: I0308 03:33:43.321644 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 08 03:33:43.379826 master-0 kubenswrapper[33141]: I0308 03:33:43.379777 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 08 03:33:43.654654 master-0 kubenswrapper[33141]: I0308 03:33:43.654573 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 08 03:33:43.933089 master-0 kubenswrapper[33141]: I0308 03:33:43.932875 
33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 08 03:33:43.950930 master-0 kubenswrapper[33141]: I0308 03:33:43.950874 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 08 03:33:43.953567 master-0 kubenswrapper[33141]: I0308 03:33:43.953532 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-7hbhc" Mar 08 03:33:43.962668 master-0 kubenswrapper[33141]: I0308 03:33:43.962615 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 08 03:33:44.106488 master-0 kubenswrapper[33141]: I0308 03:33:44.106398 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 08 03:33:44.367896 master-0 kubenswrapper[33141]: I0308 03:33:44.367832 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 08 03:33:44.377519 master-0 kubenswrapper[33141]: I0308 03:33:44.377444 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-p5nps" Mar 08 03:33:44.538216 master-0 kubenswrapper[33141]: I0308 03:33:44.538151 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 08 03:33:44.691803 master-0 kubenswrapper[33141]: I0308 03:33:44.691703 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 08 03:33:44.726788 master-0 kubenswrapper[33141]: I0308 03:33:44.726727 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 08 
03:33:44.754882 master-0 kubenswrapper[33141]: I0308 03:33:44.754821 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 08 03:33:44.787021 master-0 kubenswrapper[33141]: I0308 03:33:44.786819 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 08 03:33:45.040970 master-0 kubenswrapper[33141]: I0308 03:33:45.040895 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 08 03:33:45.058509 master-0 kubenswrapper[33141]: I0308 03:33:45.058446 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 08 03:33:45.104524 master-0 kubenswrapper[33141]: I0308 03:33:45.104443 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 08 03:33:45.176223 master-0 kubenswrapper[33141]: I0308 03:33:45.175403 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 08 03:33:45.176489 master-0 kubenswrapper[33141]: I0308 03:33:45.176209 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 08 03:33:45.206603 master-0 kubenswrapper[33141]: I0308 03:33:45.203754 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 08 03:33:45.216809 master-0 kubenswrapper[33141]: I0308 03:33:45.216738 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 08 03:33:45.249739 master-0 kubenswrapper[33141]: I0308 03:33:45.249678 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 08 03:33:45.331492 master-0 kubenswrapper[33141]: I0308 03:33:45.331315 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 08 03:33:45.399168 master-0 kubenswrapper[33141]: I0308 03:33:45.399131 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Mar 08 03:33:45.400800 master-0 kubenswrapper[33141]: I0308 03:33:45.400758 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 08 03:33:45.400980 master-0 kubenswrapper[33141]: I0308 03:33:45.400961 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-h5rwm"
Mar 08 03:33:45.428572 master-0 kubenswrapper[33141]: I0308 03:33:45.428538 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 08 03:33:45.432639 master-0 kubenswrapper[33141]: I0308 03:33:45.432617 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 08 03:33:45.451390 master-0 kubenswrapper[33141]: I0308 03:33:45.451359 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Mar 08 03:33:45.504858 master-0 kubenswrapper[33141]: I0308 03:33:45.504820 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 08 03:33:45.542297 master-0 kubenswrapper[33141]: I0308 03:33:45.542248 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-sp7gt"
Mar 08 03:33:45.582216 master-0 kubenswrapper[33141]: I0308 03:33:45.582106 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls"
Mar 08 03:33:45.631230 master-0 kubenswrapper[33141]: I0308 03:33:45.631186 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 08 03:33:45.641653 master-0 kubenswrapper[33141]: I0308 03:33:45.641614 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 08 03:33:45.827550 master-0 kubenswrapper[33141]: I0308 03:33:45.827490 33141 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 08 03:33:45.852726 master-0 kubenswrapper[33141]: I0308 03:33:45.852594 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 08 03:33:46.003618 master-0 kubenswrapper[33141]: I0308 03:33:46.003542 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 08 03:33:46.005851 master-0 kubenswrapper[33141]: I0308 03:33:46.005815 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 08 03:33:46.068367 master-0 kubenswrapper[33141]: I0308 03:33:46.068287 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 08 03:33:46.153434 master-0 kubenswrapper[33141]: I0308 03:33:46.153293 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 08 03:33:46.156554 master-0 kubenswrapper[33141]: I0308 03:33:46.156492 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-s25xz"
Mar 08 03:33:46.163136 master-0 kubenswrapper[33141]: I0308 03:33:46.163077 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 08 03:33:46.203889 master-0 kubenswrapper[33141]: I0308 03:33:46.203835 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 08 03:33:46.243763 master-0 kubenswrapper[33141]: I0308 03:33:46.243698 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Mar 08 03:33:46.338198 master-0 kubenswrapper[33141]: I0308 03:33:46.338142 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Mar 08 03:33:46.420463 master-0 kubenswrapper[33141]: I0308 03:33:46.420324 33141 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 08 03:33:46.519530 master-0 kubenswrapper[33141]: I0308 03:33:46.519472 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 08 03:33:46.549260 master-0 kubenswrapper[33141]: I0308 03:33:46.549198 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 08 03:33:46.647273 master-0 kubenswrapper[33141]: I0308 03:33:46.647165 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 08 03:33:46.688977 master-0 kubenswrapper[33141]: I0308 03:33:46.688598 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 08 03:33:46.713749 master-0 kubenswrapper[33141]: I0308 03:33:46.713644 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 08 03:33:46.727542 master-0 kubenswrapper[33141]: I0308 03:33:46.727466 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-da0kci31im4hq"
Mar 08 03:33:46.754725 master-0 kubenswrapper[33141]: I0308 03:33:46.754659 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 08 03:33:46.782832 master-0 kubenswrapper[33141]: I0308 03:33:46.782776 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 08 03:33:46.794788 master-0 kubenswrapper[33141]: I0308 03:33:46.794731 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 08 03:33:46.815563 master-0 kubenswrapper[33141]: I0308 03:33:46.815511 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Mar 08 03:33:46.834923 master-0 kubenswrapper[33141]: I0308 03:33:46.834846 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 08 03:33:46.835071 master-0 kubenswrapper[33141]: I0308 03:33:46.835050 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 08 03:33:46.872278 master-0 kubenswrapper[33141]: I0308 03:33:46.872185 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 08 03:33:46.882260 master-0 kubenswrapper[33141]: I0308 03:33:46.882178 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 08 03:33:46.935644 master-0 kubenswrapper[33141]: I0308 03:33:46.935565 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Mar 08 03:33:46.943592 master-0 kubenswrapper[33141]: I0308 03:33:46.943453 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 08 03:33:46.965975 master-0 kubenswrapper[33141]: I0308 03:33:46.965685 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 08 03:33:46.967096 master-0 kubenswrapper[33141]: I0308 03:33:46.967050 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 08 03:33:47.076890 master-0 kubenswrapper[33141]: I0308 03:33:47.076825 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Mar 08 03:33:47.132432 master-0 kubenswrapper[33141]: I0308 03:33:47.132403 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Mar 08 03:33:47.182042 master-0 kubenswrapper[33141]: I0308 03:33:47.181965 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-mw5z6"
Mar 08 03:33:47.224334 master-0 kubenswrapper[33141]: I0308 03:33:47.224149 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 08 03:33:47.243139 master-0 kubenswrapper[33141]: I0308 03:33:47.243081 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 08 03:33:47.273295 master-0 kubenswrapper[33141]: I0308 03:33:47.273224 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 08 03:33:47.275358 master-0 kubenswrapper[33141]: I0308 03:33:47.275308 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 08 03:33:47.380863 master-0 kubenswrapper[33141]: I0308 03:33:47.380772 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 08 03:33:47.395846 master-0 kubenswrapper[33141]: I0308 03:33:47.395762 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 08 03:33:47.401352 master-0 kubenswrapper[33141]: I0308 03:33:47.401308 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 08 03:33:47.506756 master-0 kubenswrapper[33141]: I0308 03:33:47.506677 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 08 03:33:47.544432 master-0 kubenswrapper[33141]: I0308 03:33:47.544343 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 08 03:33:47.584826 master-0 kubenswrapper[33141]: I0308 03:33:47.584704 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 08 03:33:47.590098 master-0 kubenswrapper[33141]: I0308 03:33:47.589831 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 08 03:33:47.598518 master-0 kubenswrapper[33141]: I0308 03:33:47.598320 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 08 03:33:47.730719 master-0 kubenswrapper[33141]: I0308 03:33:47.730635 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 08 03:33:47.768422 master-0 kubenswrapper[33141]: I0308 03:33:47.768280 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-fvhvd"
Mar 08 03:33:47.942934 master-0 kubenswrapper[33141]: I0308 03:33:47.942854 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 08 03:33:47.984402 master-0 kubenswrapper[33141]: I0308 03:33:47.984343 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 08 03:33:47.996933 master-0 kubenswrapper[33141]: I0308 03:33:47.996826 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 08 03:33:48.040684 master-0 kubenswrapper[33141]: I0308 03:33:48.040528 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 08 03:33:48.081410 master-0 kubenswrapper[33141]: I0308 03:33:48.081317 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-84f57b9877-mnlxs" event={"ID":"ffa263f5-3916-48bc-80f1-3f5aad28c9f9","Type":"ContainerStarted","Data":"51108ffd04c53ba2cfab7b8a7aed6477df7cd083a56ce0ca5515701a74130be6"}
Mar 08 03:33:48.084386 master-0 kubenswrapper[33141]: I0308 03:33:48.084334 33141 patch_prober.go:28] interesting pod/downloads-84f57b9877-mnlxs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused" start-of-body=
Mar 08 03:33:48.084551 master-0 kubenswrapper[33141]: I0308 03:33:48.084402 33141 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-mnlxs" podUID="ffa263f5-3916-48bc-80f1-3f5aad28c9f9" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused"
Mar 08 03:33:48.084745 master-0 kubenswrapper[33141]: I0308 03:33:48.084702 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-84f57b9877-mnlxs"
Mar 08 03:33:48.201651 master-0 kubenswrapper[33141]: I0308 03:33:48.201582 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 08 03:33:48.232217 master-0 kubenswrapper[33141]: I0308 03:33:48.232131 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Mar 08 03:33:48.237425 master-0 kubenswrapper[33141]: I0308 03:33:48.237372 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-wvdjh"
Mar 08 03:33:48.257415 master-0 kubenswrapper[33141]: I0308 03:33:48.257320 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-nqkx9"
Mar 08 03:33:48.311358 master-0 kubenswrapper[33141]: I0308 03:33:48.311160 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Mar 08 03:33:48.313088 master-0 kubenswrapper[33141]: I0308 03:33:48.313037 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-lf8gs"
Mar 08 03:33:48.316623 master-0 kubenswrapper[33141]: I0308 03:33:48.316563 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 08 03:33:48.322455 master-0 kubenswrapper[33141]: I0308 03:33:48.322390 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 08 03:33:48.451889 master-0 kubenswrapper[33141]: I0308 03:33:48.451798 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 08 03:33:48.454140 master-0 kubenswrapper[33141]: I0308 03:33:48.454044 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-jzkrb"
Mar 08 03:33:48.454419 master-0 kubenswrapper[33141]: I0308 03:33:48.454314 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 08 03:33:48.463819 master-0 kubenswrapper[33141]: I0308 03:33:48.463732 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-gqqgx"
Mar 08 03:33:48.466387 master-0 kubenswrapper[33141]: I0308 03:33:48.466308 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 08 03:33:48.537521 master-0 kubenswrapper[33141]: I0308 03:33:48.537428 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 08 03:33:48.557293 master-0 kubenswrapper[33141]: I0308 03:33:48.557187 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 08 03:33:48.566093 master-0 kubenswrapper[33141]: I0308 03:33:48.565971 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Mar 08 03:33:48.592408 master-0 kubenswrapper[33141]: I0308 03:33:48.592319 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 08 03:33:48.741767 master-0 kubenswrapper[33141]: I0308 03:33:48.739891 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 08 03:33:48.795719 master-0 kubenswrapper[33141]: I0308 03:33:48.794463 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 08 03:33:48.803655 master-0 kubenswrapper[33141]: I0308 03:33:48.803503 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 08 03:33:48.836127 master-0 kubenswrapper[33141]: I0308 03:33:48.835995 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 08 03:33:48.837776 master-0 kubenswrapper[33141]: I0308 03:33:48.837541 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 08 03:33:48.865450 master-0 kubenswrapper[33141]: I0308 03:33:48.865375 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 08 03:33:48.865694 master-0 kubenswrapper[33141]: I0308 03:33:48.865503 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 08 03:33:48.882760 master-0 kubenswrapper[33141]: I0308 03:33:48.882687 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 08 03:33:49.027098 master-0 kubenswrapper[33141]: I0308 03:33:49.026525 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 08 03:33:49.089573 master-0 kubenswrapper[33141]: I0308 03:33:49.089432 33141 patch_prober.go:28] interesting pod/downloads-84f57b9877-mnlxs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused" start-of-body=
Mar 08 03:33:49.089573 master-0 kubenswrapper[33141]: I0308 03:33:49.089498 33141 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-mnlxs" podUID="ffa263f5-3916-48bc-80f1-3f5aad28c9f9" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused"
Mar 08 03:33:49.090101 master-0 kubenswrapper[33141]: I0308 03:33:49.089803 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 08 03:33:49.096779 master-0 kubenswrapper[33141]: I0308 03:33:49.096738 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Mar 08 03:33:49.108006 master-0 kubenswrapper[33141]: I0308 03:33:49.107889 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 08 03:33:49.246658 master-0 kubenswrapper[33141]: I0308 03:33:49.246608 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Mar 08 03:33:49.443562 master-0 kubenswrapper[33141]: I0308 03:33:49.443376 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 08 03:33:49.460403 master-0 kubenswrapper[33141]: I0308 03:33:49.460325 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 08 03:33:49.496356 master-0 kubenswrapper[33141]: I0308 03:33:49.496294 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 08 03:33:49.575866 master-0 kubenswrapper[33141]: I0308 03:33:49.575824 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 08 03:33:49.584100 master-0 kubenswrapper[33141]: I0308 03:33:49.584078 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-h4sjt"
Mar 08 03:33:49.613859 master-0 kubenswrapper[33141]: I0308 03:33:49.613778 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 08 03:33:49.630669 master-0 kubenswrapper[33141]: I0308 03:33:49.630592 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 08 03:33:49.679292 master-0 kubenswrapper[33141]: I0308 03:33:49.679212 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 08 03:33:49.681074 master-0 kubenswrapper[33141]: I0308 03:33:49.681028 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 08 03:33:49.761111 master-0 kubenswrapper[33141]: I0308 03:33:49.761041 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Mar 08 03:33:49.766616 master-0 kubenswrapper[33141]: I0308 03:33:49.766573 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 08 03:33:49.796499 master-0 kubenswrapper[33141]: I0308 03:33:49.796418 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 08 03:33:49.851334 master-0 kubenswrapper[33141]: I0308 03:33:49.851253 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 08 03:33:49.908387 master-0 kubenswrapper[33141]: I0308 03:33:49.908304 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 08 03:33:49.940302 master-0 kubenswrapper[33141]: I0308 03:33:49.940231 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 08 03:33:49.963664 master-0 kubenswrapper[33141]: I0308 03:33:49.963609 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 08 03:33:49.978651 master-0 kubenswrapper[33141]: I0308 03:33:49.978579 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Mar 08 03:33:50.068882 master-0 kubenswrapper[33141]: I0308 03:33:50.068616 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Mar 08 03:33:50.098664 master-0 kubenswrapper[33141]: I0308 03:33:50.098573 33141 patch_prober.go:28] interesting pod/downloads-84f57b9877-mnlxs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused" start-of-body=
Mar 08 03:33:50.098936 master-0 kubenswrapper[33141]: I0308 03:33:50.098665 33141 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-mnlxs" podUID="ffa263f5-3916-48bc-80f1-3f5aad28c9f9" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused"
Mar 08 03:33:50.306156 master-0 kubenswrapper[33141]: I0308 03:33:50.306059 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 08 03:33:50.319508 master-0 kubenswrapper[33141]: I0308 03:33:50.319355 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 08 03:33:50.330674 master-0 kubenswrapper[33141]: I0308 03:33:50.330593 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Mar 08 03:33:50.365555 master-0 kubenswrapper[33141]: I0308 03:33:50.365487 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 08 03:33:50.393170 master-0 kubenswrapper[33141]: I0308 03:33:50.393112 33141 patch_prober.go:28] interesting pod/downloads-84f57b9877-mnlxs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused" start-of-body=
Mar 08 03:33:50.393531 master-0 kubenswrapper[33141]: I0308 03:33:50.393478 33141 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-mnlxs" podUID="ffa263f5-3916-48bc-80f1-3f5aad28c9f9" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused"
Mar 08 03:33:50.393826 master-0 kubenswrapper[33141]: I0308 03:33:50.393755 33141 patch_prober.go:28] interesting pod/downloads-84f57b9877-mnlxs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused" start-of-body=
Mar 08 03:33:50.394016 master-0 kubenswrapper[33141]: I0308 03:33:50.393881 33141 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-84f57b9877-mnlxs" podUID="ffa263f5-3916-48bc-80f1-3f5aad28c9f9" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.95:8080/\": dial tcp 10.128.0.95:8080: connect: connection refused"
Mar 08 03:33:50.436471 master-0 kubenswrapper[33141]: I0308 03:33:50.436412 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 08 03:33:50.484264 master-0 kubenswrapper[33141]: I0308 03:33:50.484173 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 08 03:33:50.626043 master-0 kubenswrapper[33141]: I0308 03:33:50.624711 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 08 03:33:50.641200 master-0 kubenswrapper[33141]: I0308 03:33:50.641148 33141 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 08 03:33:50.643655 master-0 kubenswrapper[33141]: I0308 03:33:50.643606 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Mar 08 03:33:50.675314 master-0 kubenswrapper[33141]: I0308 03:33:50.675239 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-ftthh"
Mar 08 03:33:50.686267 master-0 kubenswrapper[33141]: I0308 03:33:50.686209 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 08 03:33:50.720103 master-0 kubenswrapper[33141]: I0308 03:33:50.720047 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 08 03:33:50.771575 master-0 kubenswrapper[33141]: I0308 03:33:50.768127 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 08 03:33:50.830695 master-0 kubenswrapper[33141]: I0308 03:33:50.830613 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 08 03:33:50.836179 master-0 kubenswrapper[33141]: I0308 03:33:50.835567 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 08 03:33:50.857158 master-0 kubenswrapper[33141]: I0308 03:33:50.857080 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-7gb49"
Mar 08 03:33:50.903899 master-0 kubenswrapper[33141]: I0308 03:33:50.903736 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 08 03:33:50.955501 master-0 kubenswrapper[33141]: I0308 03:33:50.955441 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 08 03:33:50.969024 master-0 kubenswrapper[33141]: I0308 03:33:50.968960 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 08 03:33:50.969708 master-0 kubenswrapper[33141]: I0308 03:33:50.969668 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 08 03:33:50.991933 master-0 kubenswrapper[33141]: I0308 03:33:50.991840 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 08 03:33:51.034829 master-0 kubenswrapper[33141]: I0308 03:33:51.034767 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 08 03:33:51.075492 master-0 kubenswrapper[33141]: I0308 03:33:51.075449 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 08 03:33:51.118713 master-0 kubenswrapper[33141]: I0308 03:33:51.118638 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 08 03:33:51.212003 master-0 kubenswrapper[33141]: I0308 03:33:51.211843 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 08 03:33:51.292696 master-0 kubenswrapper[33141]: I0308 03:33:51.292589 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 08 03:33:51.355677 master-0 kubenswrapper[33141]: I0308 03:33:51.354353 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 08 03:33:51.362070 master-0 kubenswrapper[33141]: I0308 03:33:51.360005 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 08 03:33:51.399091 master-0 kubenswrapper[33141]: I0308 03:33:51.397279 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 08 03:33:51.399423 master-0 kubenswrapper[33141]: I0308 03:33:51.399299 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 08 03:33:51.426977 master-0 kubenswrapper[33141]: I0308 03:33:51.419250 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 08 03:33:51.475147 master-0 kubenswrapper[33141]: I0308 03:33:51.474996 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 08 03:33:51.598436 master-0 kubenswrapper[33141]: I0308 03:33:51.598360 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 08 03:33:51.607068 master-0 kubenswrapper[33141]: I0308 03:33:51.607021 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 08 03:33:51.664260 master-0 kubenswrapper[33141]: I0308 03:33:51.664200 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-g676s"
Mar 08 03:33:51.665853 master-0 kubenswrapper[33141]: I0308 03:33:51.665802 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 08 03:33:51.671174 master-0 kubenswrapper[33141]: I0308 03:33:51.671137 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-bhtmv"
Mar 08 03:33:51.708044 master-0 kubenswrapper[33141]: I0308 03:33:51.707990 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 08 03:33:51.926715 master-0 kubenswrapper[33141]: I0308 03:33:51.926635 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 08 03:33:51.980094 master-0 kubenswrapper[33141]: I0308 03:33:51.980024 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 08 03:33:52.066658 master-0 kubenswrapper[33141]: I0308 03:33:52.066529 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 08 03:33:52.069192 master-0 kubenswrapper[33141]: I0308 03:33:52.069123 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 08 03:33:52.081501 master-0 kubenswrapper[33141]: I0308 03:33:52.081435 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 08 03:33:52.136286 master-0 kubenswrapper[33141]: I0308 03:33:52.129932 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 08 03:33:52.138045 master-0 kubenswrapper[33141]: I0308 03:33:52.137968 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 08 03:33:52.140421 master-0 kubenswrapper[33141]: I0308 03:33:52.140348 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 08 03:33:52.144580 master-0 kubenswrapper[33141]: I0308 03:33:52.144519 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 08 03:33:52.170362 master-0 kubenswrapper[33141]: I0308 03:33:52.168218 33141 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 08 03:33:52.170624 master-0 kubenswrapper[33141]: I0308 03:33:52.170220 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-5cb97dd5fc-g7fqr" podStartSLOduration=156.818731128 podStartE2EDuration="2m42.170199494s" podCreationTimestamp="2026-03-08 03:31:10 +0000 UTC" firstStartedPulling="2026-03-08 03:33:05.549093236 +0000 UTC m=+99.418986429" lastFinishedPulling="2026-03-08 03:33:10.900561582 +0000 UTC m=+104.770454795" observedRunningTime="2026-03-08 03:33:30.627364862 +0000 UTC m=+124.497258075" watchObservedRunningTime="2026-03-08 03:33:52.170199494 +0000 UTC m=+146.040092717"
Mar 08 03:33:52.177137 master-0 kubenswrapper[33141]: I0308 03:33:52.176975 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=42.176954331 podStartE2EDuration="42.176954331s" podCreationTimestamp="2026-03-08 03:33:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:33:30.63876844 +0000 UTC m=+124.508661633" watchObservedRunningTime="2026-03-08 03:33:52.176954331 +0000 UTC m=+146.046847564"
Mar 08 03:33:52.177445 master-0 kubenswrapper[33141]: I0308 03:33:52.177379 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-84f57b9877-mnlxs" podStartSLOduration=7.126821235 podStartE2EDuration="43.177369341s" podCreationTimestamp="2026-03-08 03:33:09 +0000 UTC" firstStartedPulling="2026-03-08 03:33:10.901482446 +0000 UTC m=+104.771375639" lastFinishedPulling="2026-03-08 03:33:46.952030552 +0000 UTC m=+140.821923745" observedRunningTime="2026-03-08
03:33:48.2482476 +0000 UTC m=+142.118140823" watchObservedRunningTime="2026-03-08 03:33:52.177369341 +0000 UTC m=+146.047262564" Mar 08 03:33:52.178494 master-0 kubenswrapper[33141]: I0308 03:33:52.178444 33141 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 08 03:33:52.178600 master-0 kubenswrapper[33141]: I0308 03:33:52.178513 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 08 03:33:52.188098 master-0 kubenswrapper[33141]: I0308 03:33:52.186880 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:33:52.191536 master-0 kubenswrapper[33141]: I0308 03:33:52.191465 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 08 03:33:52.210968 master-0 kubenswrapper[33141]: I0308 03:33:52.210898 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-fm6df" Mar 08 03:33:52.366678 master-0 kubenswrapper[33141]: I0308 03:33:52.366554 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=22.366534306 podStartE2EDuration="22.366534306s" podCreationTimestamp="2026-03-08 03:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:33:52.36439679 +0000 UTC m=+146.234290013" watchObservedRunningTime="2026-03-08 03:33:52.366534306 +0000 UTC m=+146.236427499" Mar 08 03:33:52.385157 master-0 kubenswrapper[33141]: I0308 03:33:52.385089 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 08 03:33:52.396385 master-0 
kubenswrapper[33141]: I0308 03:33:52.396329 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 08 03:33:52.438035 master-0 kubenswrapper[33141]: I0308 03:33:52.437885 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 08 03:33:52.451370 master-0 kubenswrapper[33141]: I0308 03:33:52.451323 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 08 03:33:52.520389 master-0 kubenswrapper[33141]: I0308 03:33:52.520333 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 08 03:33:52.547756 master-0 kubenswrapper[33141]: I0308 03:33:52.547685 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 08 03:33:52.556983 master-0 kubenswrapper[33141]: I0308 03:33:52.556886 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 08 03:33:52.562380 master-0 kubenswrapper[33141]: I0308 03:33:52.562332 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 08 03:33:52.614570 master-0 kubenswrapper[33141]: I0308 03:33:52.614488 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 08 03:33:52.663826 master-0 kubenswrapper[33141]: I0308 03:33:52.663738 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 08 03:33:52.753785 master-0 kubenswrapper[33141]: I0308 03:33:52.753606 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 08 03:33:52.778406 
master-0 kubenswrapper[33141]: I0308 03:33:52.778339 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 08 03:33:52.795460 master-0 kubenswrapper[33141]: I0308 03:33:52.795399 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-dqqnp"
Mar 08 03:33:52.818103 master-0 kubenswrapper[33141]: I0308 03:33:52.817997 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 08 03:33:52.818334 master-0 kubenswrapper[33141]: I0308 03:33:52.818222 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 08 03:33:52.933411 master-0 kubenswrapper[33141]: I0308 03:33:52.933328 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Mar 08 03:33:52.955503 master-0 kubenswrapper[33141]: I0308 03:33:52.954895 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 08 03:33:52.999442 master-0 kubenswrapper[33141]: I0308 03:33:52.997900 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-pz6cl"
Mar 08 03:33:53.009526 master-0 kubenswrapper[33141]: I0308 03:33:53.009462 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 08 03:33:53.405360 master-0 kubenswrapper[33141]: I0308 03:33:53.405176 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Mar 08 03:33:53.414505 master-0 kubenswrapper[33141]: I0308 03:33:53.414421 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 08 03:33:53.431324 master-0 kubenswrapper[33141]: I0308 03:33:53.431230 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-46c6c"
Mar 08 03:33:53.477778 master-0 kubenswrapper[33141]: I0308 03:33:53.477672 33141 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 08 03:33:53.478999 master-0 kubenswrapper[33141]: I0308 03:33:53.478924 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="a814bd60de133d95cf99630a978c017e" containerName="startup-monitor" containerID="cri-o://192ccc130389c9b395ee876a542eccf82e039d4231c1b9164273ea629c05ea3a" gracePeriod=5
Mar 08 03:33:53.490890 master-0 kubenswrapper[33141]: I0308 03:33:53.490839 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 08 03:33:53.581464 master-0 kubenswrapper[33141]: I0308 03:33:53.581388 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 08 03:33:53.590066 master-0 kubenswrapper[33141]: I0308 03:33:53.590001 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Mar 08 03:33:53.593697 master-0 kubenswrapper[33141]: I0308 03:33:53.593654 33141 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 08 03:33:53.773028 master-0 kubenswrapper[33141]: I0308 03:33:53.772879 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 08 03:33:53.900373 master-0 kubenswrapper[33141]: I0308 03:33:53.900306 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 08 03:33:53.901077 master-0 kubenswrapper[33141]: I0308 03:33:53.900884 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 08 03:33:53.961983 master-0 kubenswrapper[33141]: I0308 03:33:53.961880 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 08 03:33:53.972627 master-0 kubenswrapper[33141]: I0308 03:33:53.972529 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 08 03:33:53.989817 master-0 kubenswrapper[33141]: I0308 03:33:53.989724 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 08 03:33:54.022934 master-0 kubenswrapper[33141]: I0308 03:33:54.022844 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 08 03:33:54.061072 master-0 kubenswrapper[33141]: I0308 03:33:54.060876 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Mar 08 03:33:54.081387 master-0 kubenswrapper[33141]: I0308 03:33:54.081324 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 08 03:33:54.123654 master-0 kubenswrapper[33141]: I0308 03:33:54.123573 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 08 03:33:54.134816 master-0 kubenswrapper[33141]: I0308 03:33:54.134745 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 08 03:33:54.215575 master-0 kubenswrapper[33141]: I0308 03:33:54.215507 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-d6gwq"
Mar 08 03:33:54.236399 master-0 kubenswrapper[33141]: I0308 03:33:54.236351 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 08 03:33:54.461917 master-0 kubenswrapper[33141]: I0308 03:33:54.461786 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 08 03:33:54.565622 master-0 kubenswrapper[33141]: I0308 03:33:54.565553 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 08 03:33:54.620612 master-0 kubenswrapper[33141]: I0308 03:33:54.620526 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Mar 08 03:33:54.654632 master-0 kubenswrapper[33141]: I0308 03:33:54.654566 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Mar 08 03:33:54.709502 master-0 kubenswrapper[33141]: I0308 03:33:54.709430 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 08 03:33:54.855402 master-0 kubenswrapper[33141]: I0308 03:33:54.855302 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 08 03:33:54.919995 master-0 kubenswrapper[33141]: I0308 03:33:54.919899 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 08 03:33:54.939956 master-0 kubenswrapper[33141]: I0308 03:33:54.939870 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 08 03:33:55.162989 master-0 kubenswrapper[33141]: I0308 03:33:55.162800 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 08 03:33:55.228932 master-0 kubenswrapper[33141]: I0308 03:33:55.228837 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 08 03:33:55.357737 master-0 kubenswrapper[33141]: I0308 03:33:55.357653 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 08 03:33:55.410884 master-0 kubenswrapper[33141]: I0308 03:33:55.410783 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 08 03:33:55.491274 master-0 kubenswrapper[33141]: I0308 03:33:55.491124 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 08 03:33:55.493654 master-0 kubenswrapper[33141]: I0308 03:33:55.493598 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-vzsqv"
Mar 08 03:33:55.571324 master-0 kubenswrapper[33141]: I0308 03:33:55.571235 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Mar 08 03:33:55.593325 master-0 kubenswrapper[33141]: I0308 03:33:55.593270 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 08 03:33:55.665992 master-0 kubenswrapper[33141]: I0308 03:33:55.665919 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 08 03:33:55.774305 master-0 kubenswrapper[33141]: I0308 03:33:55.774227 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 08 03:33:55.833149 master-0 kubenswrapper[33141]: I0308 03:33:55.833064 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-9gswq"
Mar 08 03:33:55.881392 master-0 kubenswrapper[33141]: I0308 03:33:55.881310 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 08 03:33:56.220198 master-0 kubenswrapper[33141]: I0308 03:33:56.220029 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 08 03:33:56.456393 master-0 kubenswrapper[33141]: I0308 03:33:56.456306 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-vbs7r"
Mar 08 03:33:56.517215 master-0 kubenswrapper[33141]: I0308 03:33:56.517139 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 08 03:33:56.580036 master-0 kubenswrapper[33141]: I0308 03:33:56.579958 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 08 03:33:56.610291 master-0 kubenswrapper[33141]: I0308 03:33:56.610175 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 08 03:33:56.856592 master-0 kubenswrapper[33141]: I0308 03:33:56.856402 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Mar 08 03:33:56.926086 master-0 kubenswrapper[33141]: I0308 03:33:56.925986 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-qnnnr"
Mar 08 03:33:56.939956 master-0 kubenswrapper[33141]: I0308 03:33:56.939430 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-rsc8q"
Mar 08 03:33:57.192261 master-0 kubenswrapper[33141]: I0308 03:33:57.192066 33141 reflector.go:368] Caches populated for *v1.ConfigMap
from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 08 03:33:57.284217 master-0 kubenswrapper[33141]: I0308 03:33:57.284087 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 08 03:33:57.665474 master-0 kubenswrapper[33141]: I0308 03:33:57.665402 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Mar 08 03:33:58.039434 master-0 kubenswrapper[33141]: I0308 03:33:58.039342 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 08 03:33:58.122411 master-0 kubenswrapper[33141]: I0308 03:33:58.122334 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 08 03:33:58.372279 master-0 kubenswrapper[33141]: I0308 03:33:58.372152 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 08 03:33:58.508801 master-0 kubenswrapper[33141]: I0308 03:33:58.508703 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 08 03:33:58.950298 master-0 kubenswrapper[33141]: I0308 03:33:58.950219 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 08 03:33:59.097167 master-0 kubenswrapper[33141]: I0308 03:33:59.096995 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_a814bd60de133d95cf99630a978c017e/startup-monitor/0.log"
Mar 08 03:33:59.097167 master-0 kubenswrapper[33141]: I0308 03:33:59.097114 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:33:59.173760 master-0 kubenswrapper[33141]: I0308 03:33:59.173702 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_a814bd60de133d95cf99630a978c017e/startup-monitor/0.log"
Mar 08 03:33:59.173963 master-0 kubenswrapper[33141]: I0308 03:33:59.173784 33141 generic.go:334] "Generic (PLEG): container finished" podID="a814bd60de133d95cf99630a978c017e" containerID="192ccc130389c9b395ee876a542eccf82e039d4231c1b9164273ea629c05ea3a" exitCode=137
Mar 08 03:33:59.173963 master-0 kubenswrapper[33141]: I0308 03:33:59.173844 33141 scope.go:117] "RemoveContainer" containerID="192ccc130389c9b395ee876a542eccf82e039d4231c1b9164273ea629c05ea3a"
Mar 08 03:33:59.174234 master-0 kubenswrapper[33141]: I0308 03:33:59.173998 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:33:59.174863 master-0 kubenswrapper[33141]: I0308 03:33:59.174822 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") "
Mar 08 03:33:59.174950 master-0 kubenswrapper[33141]: I0308 03:33:59.174925 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") "
Mar 08 03:33:59.175063 master-0 kubenswrapper[33141]: I0308 03:33:59.174960 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") "
Mar 08 03:33:59.175063 master-0 kubenswrapper[33141]: I0308 03:33:59.174978 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") "
Mar 08 03:33:59.175063 master-0 kubenswrapper[33141]: I0308 03:33:59.174975 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests" (OuterVolumeSpecName: "manifests") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:33:59.175063 master-0 kubenswrapper[33141]: I0308 03:33:59.175029 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:33:59.175196 master-0 kubenswrapper[33141]: I0308 03:33:59.175053 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") "
Mar 08 03:33:59.175232 master-0 kubenswrapper[33141]: I0308 03:33:59.175078 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock" (OuterVolumeSpecName: "var-lock") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:33:59.175262 master-0 kubenswrapper[33141]: I0308 03:33:59.175084 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log" (OuterVolumeSpecName: "var-log") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:33:59.175727 master-0 kubenswrapper[33141]: I0308 03:33:59.175682 33141 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") on node \"master-0\" DevicePath \"\""
Mar 08 03:33:59.175773 master-0 kubenswrapper[33141]: I0308 03:33:59.175730 33141 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") on node \"master-0\" DevicePath \"\""
Mar 08 03:33:59.175773 master-0 kubenswrapper[33141]: I0308 03:33:59.175753 33141 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:33:59.175835 master-0 kubenswrapper[33141]: I0308 03:33:59.175776 33141 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 08 03:33:59.183455 master-0 kubenswrapper[33141]: I0308 03:33:59.183364 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:33:59.227371 master-0 kubenswrapper[33141]: I0308 03:33:59.227324 33141 scope.go:117] "RemoveContainer" containerID="192ccc130389c9b395ee876a542eccf82e039d4231c1b9164273ea629c05ea3a"
Mar 08 03:33:59.227854 master-0 kubenswrapper[33141]: E0308 03:33:59.227801 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"192ccc130389c9b395ee876a542eccf82e039d4231c1b9164273ea629c05ea3a\": container with ID starting with 192ccc130389c9b395ee876a542eccf82e039d4231c1b9164273ea629c05ea3a not found: ID does not exist" containerID="192ccc130389c9b395ee876a542eccf82e039d4231c1b9164273ea629c05ea3a"
Mar 08 03:33:59.228019 master-0 kubenswrapper[33141]: I0308 03:33:59.227984 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"192ccc130389c9b395ee876a542eccf82e039d4231c1b9164273ea629c05ea3a"} err="failed to get container status \"192ccc130389c9b395ee876a542eccf82e039d4231c1b9164273ea629c05ea3a\": rpc error: code = NotFound desc = could not find container \"192ccc130389c9b395ee876a542eccf82e039d4231c1b9164273ea629c05ea3a\": container with ID starting with 192ccc130389c9b395ee876a542eccf82e039d4231c1b9164273ea629c05ea3a not found: ID does not exist"
Mar 08 03:33:59.276641 master-0 kubenswrapper[33141]: I0308 03:33:59.276605 33141 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:33:59.531885 master-0 kubenswrapper[33141]: I0308 03:33:59.531776 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 08 03:34:00.364396 master-0 kubenswrapper[33141]: I0308 03:34:00.364287 33141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir"
podUID="a814bd60de133d95cf99630a978c017e" path="/var/lib/kubelet/pods/a814bd60de133d95cf99630a978c017e/volumes"
Mar 08 03:34:00.365226 master-0 kubenswrapper[33141]: I0308 03:34:00.364773 33141 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Mar 08 03:34:00.387421 master-0 kubenswrapper[33141]: I0308 03:34:00.387295 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 08 03:34:00.387421 master-0 kubenswrapper[33141]: I0308 03:34:00.387361 33141 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="1e73cfd4-fd52-4593-b83a-8d577e0cc563"
Mar 08 03:34:00.402813 master-0 kubenswrapper[33141]: I0308 03:34:00.402723 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-84f57b9877-mnlxs"
Mar 08 03:34:00.403288 master-0 kubenswrapper[33141]: I0308 03:34:00.403214 33141 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 08 03:34:00.403425 master-0 kubenswrapper[33141]: I0308 03:34:00.403283 33141 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="1e73cfd4-fd52-4593-b83a-8d577e0cc563"
Mar 08 03:34:00.665266 master-0 kubenswrapper[33141]: I0308 03:34:00.665103 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 08 03:34:06.819376 master-0 kubenswrapper[33141]: I0308 03:34:06.819287 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-fd44b487d-l5wc7"]
Mar 08 03:34:06.822502 master-0 kubenswrapper[33141]: E0308 03:34:06.819694 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f958554-d0e0-4a2d-84e8-17e20ae7625c" containerName="installer"
Mar 08 03:34:06.822502 master-0 kubenswrapper[33141]: I0308 03:34:06.819720 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f958554-d0e0-4a2d-84e8-17e20ae7625c" containerName="installer"
Mar 08 03:34:06.822502 master-0 kubenswrapper[33141]: E0308 03:34:06.819762 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a814bd60de133d95cf99630a978c017e" containerName="startup-monitor"
Mar 08 03:34:06.822502 master-0 kubenswrapper[33141]: I0308 03:34:06.819775 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="a814bd60de133d95cf99630a978c017e" containerName="startup-monitor"
Mar 08 03:34:06.822502 master-0 kubenswrapper[33141]: I0308 03:34:06.820114 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f958554-d0e0-4a2d-84e8-17e20ae7625c" containerName="installer"
Mar 08 03:34:06.822502 master-0 kubenswrapper[33141]: I0308 03:34:06.820179 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="a814bd60de133d95cf99630a978c017e" containerName="startup-monitor"
Mar 08 03:34:06.822502 master-0 kubenswrapper[33141]: I0308 03:34:06.820893 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:34:06.822874 master-0 kubenswrapper[33141]: I0308 03:34:06.822667 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 08 03:34:06.823490 master-0 kubenswrapper[33141]: I0308 03:34:06.823442 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-9h99f"
Mar 08 03:34:06.823600 master-0 kubenswrapper[33141]: I0308 03:34:06.823560 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 08 03:34:06.823662 master-0 kubenswrapper[33141]: I0308 03:34:06.823645 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 08 03:34:06.823713 master-0 kubenswrapper[33141]: I0308 03:34:06.823569 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 08 03:34:06.823713 master-0 kubenswrapper[33141]: I0308 03:34:06.823651 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Mar 08 03:34:06.834190 master-0 kubenswrapper[33141]: I0308 03:34:06.834129 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-fd44b487d-l5wc7"]
Mar 08 03:34:06.900302 master-0 kubenswrapper[33141]: I0308 03:34:06.900210 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31335248-972e-4193-8525-86cdc3f2ad4f-service-ca\") pod \"console-fd44b487d-l5wc7\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:34:06.900554 master-0 kubenswrapper[33141]: I0308 03:34:06.900412 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2nxt\" (UniqueName: \"kubernetes.io/projected/31335248-972e-4193-8525-86cdc3f2ad4f-kube-api-access-l2nxt\") pod \"console-fd44b487d-l5wc7\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:34:06.900554 master-0 kubenswrapper[33141]: I0308 03:34:06.900470 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert\") pod \"console-fd44b487d-l5wc7\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:34:06.900554 master-0 kubenswrapper[33141]: I0308 03:34:06.900494 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-oauth-config\") pod \"console-fd44b487d-l5wc7\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:34:06.901433 master-0 kubenswrapper[33141]: I0308 03:34:06.900646 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31335248-972e-4193-8525-86cdc3f2ad4f-oauth-serving-cert\") pod \"console-fd44b487d-l5wc7\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:34:06.901433 master-0 kubenswrapper[33141]: I0308 03:34:06.900710 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31335248-972e-4193-8525-86cdc3f2ad4f-console-config\") pod \"console-fd44b487d-l5wc7\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:34:07.002102 master-0 kubenswrapper[33141]: I0308 03:34:07.002020 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31335248-972e-4193-8525-86cdc3f2ad4f-service-ca\") pod \"console-fd44b487d-l5wc7\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:34:07.002294 master-0 kubenswrapper[33141]: I0308 03:34:07.002178 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2nxt\" (UniqueName: \"kubernetes.io/projected/31335248-972e-4193-8525-86cdc3f2ad4f-kube-api-access-l2nxt\") pod \"console-fd44b487d-l5wc7\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:34:07.002294 master-0 kubenswrapper[33141]: I0308 03:34:07.002249 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert\") pod \"console-fd44b487d-l5wc7\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:34:07.002538 master-0 kubenswrapper[33141]: I0308 03:34:07.002490 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-oauth-config\") pod \"console-fd44b487d-l5wc7\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:34:07.002629 master-0 kubenswrapper[33141]: E0308 03:34:07.002591 33141 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Mar 08 03:34:07.002670 master-0 kubenswrapper[33141]: I0308 03:34:07.002626 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31335248-972e-4193-8525-86cdc3f2ad4f-oauth-serving-cert\") pod \"console-fd44b487d-l5wc7\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:34:07.002700 master-0 kubenswrapper[33141]: I0308 03:34:07.002665 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31335248-972e-4193-8525-86cdc3f2ad4f-console-config\") pod \"console-fd44b487d-l5wc7\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:34:07.002740 master-0 kubenswrapper[33141]: E0308 03:34:07.002706 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert podName:31335248-972e-4193-8525-86cdc3f2ad4f nodeName:}" failed. No retries permitted until 2026-03-08 03:34:07.502672924 +0000 UTC m=+161.372566157 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert") pod "console-fd44b487d-l5wc7" (UID: "31335248-972e-4193-8525-86cdc3f2ad4f") : secret "console-serving-cert" not found
Mar 08 03:34:07.003087 master-0 kubenswrapper[33141]: I0308 03:34:07.003039 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31335248-972e-4193-8525-86cdc3f2ad4f-service-ca\") pod \"console-fd44b487d-l5wc7\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:34:07.003661 master-0 kubenswrapper[33141]: I0308 03:34:07.003628 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31335248-972e-4193-8525-86cdc3f2ad4f-oauth-serving-cert\") pod \"console-fd44b487d-l5wc7\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:34:07.003795 master-0 kubenswrapper[33141]: I0308 03:34:07.003768 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31335248-972e-4193-8525-86cdc3f2ad4f-console-config\") pod \"console-fd44b487d-l5wc7\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:34:07.006112 master-0 kubenswrapper[33141]: I0308 03:34:07.006059 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-oauth-config\") pod \"console-fd44b487d-l5wc7\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:34:07.023865 master-0 kubenswrapper[33141]: I0308 03:34:07.023812 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2nxt\" (UniqueName: \"kubernetes.io/projected/31335248-972e-4193-8525-86cdc3f2ad4f-kube-api-access-l2nxt\") pod \"console-fd44b487d-l5wc7\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:34:07.512709 master-0 kubenswrapper[33141]: I0308 03:34:07.512609 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert\") pod \"console-fd44b487d-l5wc7\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:34:07.513022 master-0 kubenswrapper[33141]: E0308 03:34:07.512952 33141 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Mar 08 03:34:07.513115 master-0 kubenswrapper[33141]: E0308 03:34:07.513076 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert podName:31335248-972e-4193-8525-86cdc3f2ad4f nodeName:}" failed. No retries permitted until 2026-03-08 03:34:08.513018313 +0000 UTC m=+162.382911516 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert") pod "console-fd44b487d-l5wc7" (UID: "31335248-972e-4193-8525-86cdc3f2ad4f") : secret "console-serving-cert" not found
Mar 08 03:34:08.528157 master-0 kubenswrapper[33141]: I0308 03:34:08.528060 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert\") pod \"console-fd44b487d-l5wc7\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:34:08.529500 master-0 kubenswrapper[33141]: E0308 03:34:08.528328 33141 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Mar 08 03:34:08.529500 master-0 kubenswrapper[33141]: E0308 03:34:08.528407 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert podName:31335248-972e-4193-8525-86cdc3f2ad4f nodeName:}" failed. No retries permitted until 2026-03-08 03:34:10.52838152 +0000 UTC m=+164.398274753 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert") pod "console-fd44b487d-l5wc7" (UID: "31335248-972e-4193-8525-86cdc3f2ad4f") : secret "console-serving-cert" not found
Mar 08 03:34:10.592379 master-0 kubenswrapper[33141]: I0308 03:34:10.592251 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert\") pod \"console-fd44b487d-l5wc7\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:34:10.593386 master-0 kubenswrapper[33141]: E0308 03:34:10.592544 33141 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Mar 08 03:34:10.593386 master-0 kubenswrapper[33141]: E0308 03:34:10.592725 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert podName:31335248-972e-4193-8525-86cdc3f2ad4f nodeName:}" failed. No retries permitted until 2026-03-08 03:34:14.592683353 +0000 UTC m=+168.462576586 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert") pod "console-fd44b487d-l5wc7" (UID: "31335248-972e-4193-8525-86cdc3f2ad4f") : secret "console-serving-cert" not found
Mar 08 03:34:14.675747 master-0 kubenswrapper[33141]: I0308 03:34:14.675516 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert\") pod \"console-fd44b487d-l5wc7\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:34:14.676856 master-0 kubenswrapper[33141]: E0308 03:34:14.675665 33141 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Mar 08 03:34:14.676856 master-0 kubenswrapper[33141]: E0308 03:34:14.675869 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert podName:31335248-972e-4193-8525-86cdc3f2ad4f nodeName:}" failed. No retries permitted until 2026-03-08 03:34:22.675846731 +0000 UTC m=+176.545739924 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert") pod "console-fd44b487d-l5wc7" (UID: "31335248-972e-4193-8525-86cdc3f2ad4f") : secret "console-serving-cert" not found
Mar 08 03:34:22.695241 master-0 kubenswrapper[33141]: I0308 03:34:22.695159 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert\") pod \"console-fd44b487d-l5wc7\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:34:22.696469 master-0 kubenswrapper[33141]: E0308 03:34:22.695338 33141 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Mar 08 03:34:22.696469 master-0 kubenswrapper[33141]: E0308 03:34:22.695416 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert podName:31335248-972e-4193-8525-86cdc3f2ad4f nodeName:}" failed. No retries permitted until 2026-03-08 03:34:38.695398459 +0000 UTC m=+192.565291652 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert") pod "console-fd44b487d-l5wc7" (UID: "31335248-972e-4193-8525-86cdc3f2ad4f") : secret "console-serving-cert" not found
Mar 08 03:34:30.148140 master-0 kubenswrapper[33141]: I0308 03:34:30.148071 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-ztkll"]
Mar 08 03:34:30.149324 master-0 kubenswrapper[33141]: I0308 03:34:30.149291 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-ztkll"
Mar 08 03:34:30.152081 master-0 kubenswrapper[33141]: I0308 03:34:30.152028 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Mar 08 03:34:30.152434 master-0 kubenswrapper[33141]: I0308 03:34:30.152384 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-pf6c2"
Mar 08 03:34:30.323011 master-0 kubenswrapper[33141]: I0308 03:34:30.322868 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8167c401-b19d-4215-9022-d299696fcb2f-host\") pod \"node-ca-ztkll\" (UID: \"8167c401-b19d-4215-9022-d299696fcb2f\") " pod="openshift-image-registry/node-ca-ztkll"
Mar 08 03:34:30.323011 master-0 kubenswrapper[33141]: I0308 03:34:30.323009 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8167c401-b19d-4215-9022-d299696fcb2f-serviceca\") pod \"node-ca-ztkll\" (UID: \"8167c401-b19d-4215-9022-d299696fcb2f\") " pod="openshift-image-registry/node-ca-ztkll"
Mar 08 03:34:30.323317 master-0 kubenswrapper[33141]: I0308 03:34:30.323217 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtxcp\" (UniqueName: \"kubernetes.io/projected/8167c401-b19d-4215-9022-d299696fcb2f-kube-api-access-qtxcp\") pod \"node-ca-ztkll\" (UID: \"8167c401-b19d-4215-9022-d299696fcb2f\") " pod="openshift-image-registry/node-ca-ztkll"
Mar 08 03:34:30.424863 master-0 kubenswrapper[33141]: I0308 03:34:30.424699 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8167c401-b19d-4215-9022-d299696fcb2f-host\") pod \"node-ca-ztkll\" (UID: \"8167c401-b19d-4215-9022-d299696fcb2f\") " pod="openshift-image-registry/node-ca-ztkll"
Mar 08 03:34:30.425257 master-0 kubenswrapper[33141]: I0308 03:34:30.425219 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8167c401-b19d-4215-9022-d299696fcb2f-serviceca\") pod \"node-ca-ztkll\" (UID: \"8167c401-b19d-4215-9022-d299696fcb2f\") " pod="openshift-image-registry/node-ca-ztkll"
Mar 08 03:34:30.425548 master-0 kubenswrapper[33141]: I0308 03:34:30.424843 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8167c401-b19d-4215-9022-d299696fcb2f-host\") pod \"node-ca-ztkll\" (UID: \"8167c401-b19d-4215-9022-d299696fcb2f\") " pod="openshift-image-registry/node-ca-ztkll"
Mar 08 03:34:30.425742 master-0 kubenswrapper[33141]: I0308 03:34:30.425707 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtxcp\" (UniqueName: \"kubernetes.io/projected/8167c401-b19d-4215-9022-d299696fcb2f-kube-api-access-qtxcp\") pod \"node-ca-ztkll\" (UID: \"8167c401-b19d-4215-9022-d299696fcb2f\") " pod="openshift-image-registry/node-ca-ztkll"
Mar 08 03:34:30.425938 master-0 kubenswrapper[33141]: I0308 03:34:30.425865 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8167c401-b19d-4215-9022-d299696fcb2f-serviceca\") pod \"node-ca-ztkll\" (UID: \"8167c401-b19d-4215-9022-d299696fcb2f\") " pod="openshift-image-registry/node-ca-ztkll"
Mar 08 03:34:30.447447 master-0 kubenswrapper[33141]: I0308 03:34:30.447415 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtxcp\" (UniqueName: \"kubernetes.io/projected/8167c401-b19d-4215-9022-d299696fcb2f-kube-api-access-qtxcp\") pod \"node-ca-ztkll\" (UID: \"8167c401-b19d-4215-9022-d299696fcb2f\") " pod="openshift-image-registry/node-ca-ztkll"
Mar 08 03:34:30.491146 master-0 kubenswrapper[33141]: I0308 03:34:30.491088 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-ztkll"
Mar 08 03:34:30.512375 master-0 kubenswrapper[33141]: W0308 03:34:30.512251 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8167c401_b19d_4215_9022_d299696fcb2f.slice/crio-aa5787a0fb49bb1b3d12cc5a03dda9f0d32327b6f27bb260d3d528b28736b9c3 WatchSource:0}: Error finding container aa5787a0fb49bb1b3d12cc5a03dda9f0d32327b6f27bb260d3d528b28736b9c3: Status 404 returned error can't find the container with id aa5787a0fb49bb1b3d12cc5a03dda9f0d32327b6f27bb260d3d528b28736b9c3
Mar 08 03:34:31.464990 master-0 kubenswrapper[33141]: I0308 03:34:31.464928 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ztkll" event={"ID":"8167c401-b19d-4215-9022-d299696fcb2f","Type":"ContainerStarted","Data":"aa5787a0fb49bb1b3d12cc5a03dda9f0d32327b6f27bb260d3d528b28736b9c3"}
Mar 08 03:34:33.481730 master-0 kubenswrapper[33141]: I0308 03:34:33.481670 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-ztkll" event={"ID":"8167c401-b19d-4215-9022-d299696fcb2f","Type":"ContainerStarted","Data":"7f6adf562fab6a58c99f7113bb884cafc17634424f1737d5bfe3eb9866eccc4d"}
Mar 08 03:34:38.759645 master-0 kubenswrapper[33141]: I0308 03:34:38.759530 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert\") pod \"console-fd44b487d-l5wc7\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:34:38.760719 master-0 kubenswrapper[33141]: E0308 03:34:38.759804 33141 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Mar 08 03:34:38.760719 master-0 kubenswrapper[33141]: E0308 03:34:38.759978 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert podName:31335248-972e-4193-8525-86cdc3f2ad4f nodeName:}" failed. No retries permitted until 2026-03-08 03:35:10.759947467 +0000 UTC m=+224.629840670 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert") pod "console-fd44b487d-l5wc7" (UID: "31335248-972e-4193-8525-86cdc3f2ad4f") : secret "console-serving-cert" not found
Mar 08 03:35:08.662865 master-0 kubenswrapper[33141]: I0308 03:35:08.662698 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-ztkll" podStartSLOduration=36.583294208 podStartE2EDuration="38.662672164s" podCreationTimestamp="2026-03-08 03:34:30 +0000 UTC" firstStartedPulling="2026-03-08 03:34:30.514575434 +0000 UTC m=+184.384468637" lastFinishedPulling="2026-03-08 03:34:32.5939534 +0000 UTC m=+186.463846593" observedRunningTime="2026-03-08 03:34:33.506548043 +0000 UTC m=+187.376441276" watchObservedRunningTime="2026-03-08 03:35:08.662672164 +0000 UTC m=+222.532565397"
Mar 08 03:35:08.667476 master-0 kubenswrapper[33141]: I0308 03:35:08.666938 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-748f76c866-99l2l"]
Mar 08 03:35:08.668557 master-0 kubenswrapper[33141]: I0308 03:35:08.668497 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:35:08.682300 master-0 kubenswrapper[33141]: I0308 03:35:08.682234 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Mar 08 03:35:08.704397 master-0 kubenswrapper[33141]: I0308 03:35:08.704320 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-748f76c866-99l2l"]
Mar 08 03:35:08.755692 master-0 kubenswrapper[33141]: I0308 03:35:08.755352 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04802a97-e959-423f-8ca7-4a8fb5e7e047-service-ca\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:35:08.755692 master-0 kubenswrapper[33141]: I0308 03:35:08.755423 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-oauth-config\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:35:08.755692 master-0 kubenswrapper[33141]: I0308 03:35:08.755474 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-serving-cert\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:35:08.755692 master-0 kubenswrapper[33141]: I0308 03:35:08.755509 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdqpx\" (UniqueName: \"kubernetes.io/projected/04802a97-e959-423f-8ca7-4a8fb5e7e047-kube-api-access-sdqpx\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:35:08.755692 master-0 kubenswrapper[33141]: I0308 03:35:08.755665 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/04802a97-e959-423f-8ca7-4a8fb5e7e047-oauth-serving-cert\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:35:08.756136 master-0 kubenswrapper[33141]: I0308 03:35:08.755733 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04802a97-e959-423f-8ca7-4a8fb5e7e047-trusted-ca-bundle\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:35:08.756136 master-0 kubenswrapper[33141]: I0308 03:35:08.755838 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-config\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:35:08.857382 master-0 kubenswrapper[33141]: I0308 03:35:08.857317 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-config\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:35:08.857626 master-0 kubenswrapper[33141]: I0308 03:35:08.857435 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04802a97-e959-423f-8ca7-4a8fb5e7e047-service-ca\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:35:08.857626 master-0 kubenswrapper[33141]: I0308 03:35:08.857453 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-oauth-config\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:35:08.857626 master-0 kubenswrapper[33141]: I0308 03:35:08.857475 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-serving-cert\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:35:08.857626 master-0 kubenswrapper[33141]: I0308 03:35:08.857495 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdqpx\" (UniqueName: \"kubernetes.io/projected/04802a97-e959-423f-8ca7-4a8fb5e7e047-kube-api-access-sdqpx\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:35:08.857626 master-0 kubenswrapper[33141]: I0308 03:35:08.857518 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/04802a97-e959-423f-8ca7-4a8fb5e7e047-oauth-serving-cert\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:35:08.857626 master-0 kubenswrapper[33141]: I0308 03:35:08.857531 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04802a97-e959-423f-8ca7-4a8fb5e7e047-trusted-ca-bundle\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:35:08.858813 master-0 kubenswrapper[33141]: E0308 03:35:08.858333 33141 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Mar 08 03:35:08.858813 master-0 kubenswrapper[33141]: E0308 03:35:08.858504 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-serving-cert podName:04802a97-e959-423f-8ca7-4a8fb5e7e047 nodeName:}" failed. No retries permitted until 2026-03-08 03:35:09.358469711 +0000 UTC m=+223.228362944 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-serving-cert") pod "console-748f76c866-99l2l" (UID: "04802a97-e959-423f-8ca7-4a8fb5e7e047") : secret "console-serving-cert" not found
Mar 08 03:35:08.859037 master-0 kubenswrapper[33141]: I0308 03:35:08.858977 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04802a97-e959-423f-8ca7-4a8fb5e7e047-trusted-ca-bundle\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:35:08.859204 master-0 kubenswrapper[33141]: I0308 03:35:08.859155 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04802a97-e959-423f-8ca7-4a8fb5e7e047-service-ca\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:35:08.859468 master-0 kubenswrapper[33141]: I0308 03:35:08.859427 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-config\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:35:08.860498 master-0 kubenswrapper[33141]: I0308 03:35:08.860438 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/04802a97-e959-423f-8ca7-4a8fb5e7e047-oauth-serving-cert\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:35:08.864973 master-0 kubenswrapper[33141]: I0308 03:35:08.864356 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-oauth-config\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:35:08.892740 master-0 kubenswrapper[33141]: I0308 03:35:08.892653 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdqpx\" (UniqueName: \"kubernetes.io/projected/04802a97-e959-423f-8ca7-4a8fb5e7e047-kube-api-access-sdqpx\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:35:09.366340 master-0 kubenswrapper[33141]: I0308 03:35:09.366244 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-serving-cert\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:35:09.366598 master-0 kubenswrapper[33141]: E0308 03:35:09.366484 33141 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Mar 08 03:35:09.366598 master-0 kubenswrapper[33141]: E0308 03:35:09.366567 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-serving-cert podName:04802a97-e959-423f-8ca7-4a8fb5e7e047 nodeName:}" failed. No retries permitted until 2026-03-08 03:35:10.36653956 +0000 UTC m=+224.236432783 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-serving-cert") pod "console-748f76c866-99l2l" (UID: "04802a97-e959-423f-8ca7-4a8fb5e7e047") : secret "console-serving-cert" not found
Mar 08 03:35:10.381557 master-0 kubenswrapper[33141]: I0308 03:35:10.381478 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-serving-cert\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:35:10.382392 master-0 kubenswrapper[33141]: E0308 03:35:10.381757 33141 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Mar 08 03:35:10.382392 master-0 kubenswrapper[33141]: E0308 03:35:10.381890 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-serving-cert podName:04802a97-e959-423f-8ca7-4a8fb5e7e047 nodeName:}" failed. No retries permitted until 2026-03-08 03:35:12.381858927 +0000 UTC m=+226.251752150 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-serving-cert") pod "console-748f76c866-99l2l" (UID: "04802a97-e959-423f-8ca7-4a8fb5e7e047") : secret "console-serving-cert" not found
Mar 08 03:35:10.789131 master-0 kubenswrapper[33141]: I0308 03:35:10.789059 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert\") pod \"console-fd44b487d-l5wc7\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:35:10.789408 master-0 kubenswrapper[33141]: E0308 03:35:10.789339 33141 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found
Mar 08 03:35:10.789548 master-0 kubenswrapper[33141]: E0308 03:35:10.789507 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert podName:31335248-972e-4193-8525-86cdc3f2ad4f nodeName:}" failed. No retries permitted until 2026-03-08 03:36:14.78946906 +0000 UTC m=+288.659362303 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert") pod "console-fd44b487d-l5wc7" (UID: "31335248-972e-4193-8525-86cdc3f2ad4f") : secret "console-serving-cert" not found Mar 08 03:35:12.413761 master-0 kubenswrapper[33141]: I0308 03:35:12.413654 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-serving-cert\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l" Mar 08 03:35:12.414648 master-0 kubenswrapper[33141]: E0308 03:35:12.414113 33141 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Mar 08 03:35:12.414648 master-0 kubenswrapper[33141]: E0308 03:35:12.414240 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-serving-cert podName:04802a97-e959-423f-8ca7-4a8fb5e7e047 nodeName:}" failed. No retries permitted until 2026-03-08 03:35:16.414209024 +0000 UTC m=+230.284102247 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-serving-cert") pod "console-748f76c866-99l2l" (UID: "04802a97-e959-423f-8ca7-4a8fb5e7e047") : secret "console-serving-cert" not found Mar 08 03:35:16.482436 master-0 kubenswrapper[33141]: I0308 03:35:16.482343 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-serving-cert\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l" Mar 08 03:35:16.484538 master-0 kubenswrapper[33141]: E0308 03:35:16.482612 33141 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: secret "console-serving-cert" not found Mar 08 03:35:16.484538 master-0 kubenswrapper[33141]: E0308 03:35:16.483509 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-serving-cert podName:04802a97-e959-423f-8ca7-4a8fb5e7e047 nodeName:}" failed. No retries permitted until 2026-03-08 03:35:24.483479199 +0000 UTC m=+238.353372402 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-serving-cert") pod "console-748f76c866-99l2l" (UID: "04802a97-e959-423f-8ca7-4a8fb5e7e047") : secret "console-serving-cert" not found Mar 08 03:35:19.851212 master-0 kubenswrapper[33141]: I0308 03:35:19.851145 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler-cert-syncer/0.log" Mar 08 03:35:19.852035 master-0 kubenswrapper[33141]: I0308 03:35:19.851896 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler/0.log" Mar 08 03:35:19.852485 master-0 kubenswrapper[33141]: I0308 03:35:19.852423 33141 generic.go:334] "Generic (PLEG): container finished" podID="1d3d45b6ce1b3764f9927e623a71adf8" containerID="93d525f5634313a3b094a60485f81885ea0fad7e7ead5c0208227c604d3c848e" exitCode=1 Mar 08 03:35:19.852485 master-0 kubenswrapper[33141]: I0308 03:35:19.852476 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerDied","Data":"93d525f5634313a3b094a60485f81885ea0fad7e7ead5c0208227c604d3c848e"} Mar 08 03:35:19.853263 master-0 kubenswrapper[33141]: I0308 03:35:19.853216 33141 scope.go:117] "RemoveContainer" containerID="93d525f5634313a3b094a60485f81885ea0fad7e7ead5c0208227c604d3c848e" Mar 08 03:35:20.865221 master-0 kubenswrapper[33141]: I0308 03:35:20.865149 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler-cert-syncer/0.log" Mar 08 03:35:20.866243 master-0 kubenswrapper[33141]: I0308 03:35:20.866159 33141 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler/0.log" Mar 08 03:35:20.866832 master-0 kubenswrapper[33141]: I0308 03:35:20.866731 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"ee449e7433c4a405cc6fedd37c99af30ee1c65aa3f79f3a17b1bfbdcb68f1429"} Mar 08 03:35:24.513132 master-0 kubenswrapper[33141]: I0308 03:35:24.512986 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-serving-cert\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l" Mar 08 03:35:24.518319 master-0 kubenswrapper[33141]: I0308 03:35:24.518264 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-serving-cert\") pod \"console-748f76c866-99l2l\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") " pod="openshift-console/console-748f76c866-99l2l" Mar 08 03:35:24.601335 master-0 kubenswrapper[33141]: I0308 03:35:24.601233 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-9h99f" Mar 08 03:35:24.609512 master-0 kubenswrapper[33141]: I0308 03:35:24.609459 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-748f76c866-99l2l" Mar 08 03:35:25.100164 master-0 kubenswrapper[33141]: I0308 03:35:25.100095 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-748f76c866-99l2l"] Mar 08 03:35:25.107353 master-0 kubenswrapper[33141]: W0308 03:35:25.107292 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04802a97_e959_423f_8ca7_4a8fb5e7e047.slice/crio-71f051994fd419869febab55e4b9ee893ce52aa603dd3d24069a362a33529882 WatchSource:0}: Error finding container 71f051994fd419869febab55e4b9ee893ce52aa603dd3d24069a362a33529882: Status 404 returned error can't find the container with id 71f051994fd419869febab55e4b9ee893ce52aa603dd3d24069a362a33529882 Mar 08 03:35:25.924555 master-0 kubenswrapper[33141]: I0308 03:35:25.924375 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-748f76c866-99l2l" event={"ID":"04802a97-e959-423f-8ca7-4a8fb5e7e047","Type":"ContainerStarted","Data":"71f051994fd419869febab55e4b9ee893ce52aa603dd3d24069a362a33529882"} Mar 08 03:35:26.446115 master-0 kubenswrapper[33141]: I0308 03:35:26.446057 33141 scope.go:117] "RemoveContainer" containerID="f6002d889d471a68aa7f22937f49a82a2b4b24ab138311c194765fb16289177d" Mar 08 03:35:29.967195 master-0 kubenswrapper[33141]: I0308 03:35:29.967130 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-748f76c866-99l2l" event={"ID":"04802a97-e959-423f-8ca7-4a8fb5e7e047","Type":"ContainerStarted","Data":"e40d7fa49a8d7f37ef3d90985c612a11721227ed1c7f39ae68a15e680adcc470"} Mar 08 03:35:33.177589 master-0 kubenswrapper[33141]: I0308 03:35:33.177487 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-748f76c866-99l2l" podStartSLOduration=21.48436241 podStartE2EDuration="25.177466583s" podCreationTimestamp="2026-03-08 03:35:08 +0000 UTC" 
firstStartedPulling="2026-03-08 03:35:25.110819744 +0000 UTC m=+238.980712947" lastFinishedPulling="2026-03-08 03:35:28.803923927 +0000 UTC m=+242.673817120" observedRunningTime="2026-03-08 03:35:29.99627221 +0000 UTC m=+243.866165463" watchObservedRunningTime="2026-03-08 03:35:33.177466583 +0000 UTC m=+247.047359796" Mar 08 03:35:33.183983 master-0 kubenswrapper[33141]: I0308 03:35:33.183922 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-fd44b487d-l5wc7"] Mar 08 03:35:33.184433 master-0 kubenswrapper[33141]: E0308 03:35:33.184395 33141 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[console-serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-console/console-fd44b487d-l5wc7" podUID="31335248-972e-4193-8525-86cdc3f2ad4f" Mar 08 03:35:33.253647 master-0 kubenswrapper[33141]: I0308 03:35:33.253570 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6fbfcd994f-49ft7"] Mar 08 03:35:33.254623 master-0 kubenswrapper[33141]: I0308 03:35:33.254587 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:35:33.277292 master-0 kubenswrapper[33141]: I0308 03:35:33.277233 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-service-ca\") pod \"console-6fbfcd994f-49ft7\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") " pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:35:33.277496 master-0 kubenswrapper[33141]: I0308 03:35:33.277313 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tc8x\" (UniqueName: \"kubernetes.io/projected/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-kube-api-access-5tc8x\") pod \"console-6fbfcd994f-49ft7\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") " pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:35:33.277496 master-0 kubenswrapper[33141]: I0308 03:35:33.277361 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-console-serving-cert\") pod \"console-6fbfcd994f-49ft7\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") " pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:35:33.277496 master-0 kubenswrapper[33141]: I0308 03:35:33.277386 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-oauth-serving-cert\") pod \"console-6fbfcd994f-49ft7\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") " pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:35:33.277496 master-0 kubenswrapper[33141]: I0308 03:35:33.277413 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-console-oauth-config\") pod \"console-6fbfcd994f-49ft7\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") " pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:35:33.277496 master-0 kubenswrapper[33141]: I0308 03:35:33.277438 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-console-config\") pod \"console-6fbfcd994f-49ft7\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") " pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:35:33.277659 master-0 kubenswrapper[33141]: I0308 03:35:33.277489 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-trusted-ca-bundle\") pod \"console-6fbfcd994f-49ft7\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") " pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:35:33.296180 master-0 kubenswrapper[33141]: I0308 03:35:33.296118 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fbfcd994f-49ft7"] Mar 08 03:35:33.378238 master-0 kubenswrapper[33141]: I0308 03:35:33.378175 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-service-ca\") pod \"console-6fbfcd994f-49ft7\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") " pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:35:33.378480 master-0 kubenswrapper[33141]: I0308 03:35:33.378420 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tc8x\" (UniqueName: \"kubernetes.io/projected/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-kube-api-access-5tc8x\") pod \"console-6fbfcd994f-49ft7\" (UID: 
\"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") " pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:35:33.378620 master-0 kubenswrapper[33141]: I0308 03:35:33.378579 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-console-serving-cert\") pod \"console-6fbfcd994f-49ft7\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") " pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:35:33.378769 master-0 kubenswrapper[33141]: I0308 03:35:33.378745 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-oauth-serving-cert\") pod \"console-6fbfcd994f-49ft7\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") " pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:35:33.378822 master-0 kubenswrapper[33141]: I0308 03:35:33.378777 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-console-oauth-config\") pod \"console-6fbfcd994f-49ft7\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") " pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:35:33.378822 master-0 kubenswrapper[33141]: I0308 03:35:33.378799 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-console-config\") pod \"console-6fbfcd994f-49ft7\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") " pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:35:33.378981 master-0 kubenswrapper[33141]: I0308 03:35:33.378956 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-service-ca\") pod 
\"console-6fbfcd994f-49ft7\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") " pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:35:33.379029 master-0 kubenswrapper[33141]: I0308 03:35:33.378970 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-trusted-ca-bundle\") pod \"console-6fbfcd994f-49ft7\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") " pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:35:33.379514 master-0 kubenswrapper[33141]: I0308 03:35:33.379479 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-oauth-serving-cert\") pod \"console-6fbfcd994f-49ft7\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") " pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:35:33.379616 master-0 kubenswrapper[33141]: I0308 03:35:33.379590 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-console-config\") pod \"console-6fbfcd994f-49ft7\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") " pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:35:33.380689 master-0 kubenswrapper[33141]: I0308 03:35:33.380655 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-trusted-ca-bundle\") pod \"console-6fbfcd994f-49ft7\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") " pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:35:33.381638 master-0 kubenswrapper[33141]: I0308 03:35:33.381614 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-console-oauth-config\") pod \"console-6fbfcd994f-49ft7\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") " pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:35:33.382267 master-0 kubenswrapper[33141]: I0308 03:35:33.382222 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-console-serving-cert\") pod \"console-6fbfcd994f-49ft7\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") " pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:35:33.396603 master-0 kubenswrapper[33141]: I0308 03:35:33.396545 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tc8x\" (UniqueName: \"kubernetes.io/projected/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-kube-api-access-5tc8x\") pod \"console-6fbfcd994f-49ft7\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") " pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:35:33.576108 master-0 kubenswrapper[33141]: I0308 03:35:33.576011 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:35:33.999963 master-0 kubenswrapper[33141]: I0308 03:35:33.999759 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-fd44b487d-l5wc7" Mar 08 03:35:34.013118 master-0 kubenswrapper[33141]: I0308 03:35:34.013038 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-fd44b487d-l5wc7" Mar 08 03:35:34.093178 master-0 kubenswrapper[33141]: I0308 03:35:34.093112 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-oauth-config\") pod \"31335248-972e-4193-8525-86cdc3f2ad4f\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " Mar 08 03:35:34.093402 master-0 kubenswrapper[33141]: I0308 03:35:34.093368 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31335248-972e-4193-8525-86cdc3f2ad4f-console-config\") pod \"31335248-972e-4193-8525-86cdc3f2ad4f\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " Mar 08 03:35:34.093443 master-0 kubenswrapper[33141]: I0308 03:35:34.093430 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2nxt\" (UniqueName: \"kubernetes.io/projected/31335248-972e-4193-8525-86cdc3f2ad4f-kube-api-access-l2nxt\") pod \"31335248-972e-4193-8525-86cdc3f2ad4f\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " Mar 08 03:35:34.093507 master-0 kubenswrapper[33141]: I0308 03:35:34.093479 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31335248-972e-4193-8525-86cdc3f2ad4f-service-ca\") pod \"31335248-972e-4193-8525-86cdc3f2ad4f\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " Mar 08 03:35:34.093575 master-0 kubenswrapper[33141]: I0308 03:35:34.093549 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31335248-972e-4193-8525-86cdc3f2ad4f-oauth-serving-cert\") pod \"31335248-972e-4193-8525-86cdc3f2ad4f\" (UID: \"31335248-972e-4193-8525-86cdc3f2ad4f\") " Mar 08 03:35:34.094650 master-0 kubenswrapper[33141]: I0308 
03:35:34.094595 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31335248-972e-4193-8525-86cdc3f2ad4f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "31335248-972e-4193-8525-86cdc3f2ad4f" (UID: "31335248-972e-4193-8525-86cdc3f2ad4f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:35:34.097653 master-0 kubenswrapper[33141]: I0308 03:35:34.097428 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31335248-972e-4193-8525-86cdc3f2ad4f-service-ca" (OuterVolumeSpecName: "service-ca") pod "31335248-972e-4193-8525-86cdc3f2ad4f" (UID: "31335248-972e-4193-8525-86cdc3f2ad4f"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:35:34.097653 master-0 kubenswrapper[33141]: I0308 03:35:34.097595 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31335248-972e-4193-8525-86cdc3f2ad4f-console-config" (OuterVolumeSpecName: "console-config") pod "31335248-972e-4193-8525-86cdc3f2ad4f" (UID: "31335248-972e-4193-8525-86cdc3f2ad4f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:35:34.101534 master-0 kubenswrapper[33141]: I0308 03:35:34.101462 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31335248-972e-4193-8525-86cdc3f2ad4f-kube-api-access-l2nxt" (OuterVolumeSpecName: "kube-api-access-l2nxt") pod "31335248-972e-4193-8525-86cdc3f2ad4f" (UID: "31335248-972e-4193-8525-86cdc3f2ad4f"). InnerVolumeSpecName "kube-api-access-l2nxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:35:34.104009 master-0 kubenswrapper[33141]: I0308 03:35:34.103976 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "31335248-972e-4193-8525-86cdc3f2ad4f" (UID: "31335248-972e-4193-8525-86cdc3f2ad4f"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:35:34.120802 master-0 kubenswrapper[33141]: I0308 03:35:34.120756 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6fbfcd994f-49ft7"] Mar 08 03:35:34.128788 master-0 kubenswrapper[33141]: W0308 03:35:34.128762 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3a1244d_2bc6_40c7_96c7_8e464a55ff4b.slice/crio-47d3f371c33823a483f2a669c21d59d08d6fdfe7d6cdeb4147f85bd9f5708416 WatchSource:0}: Error finding container 47d3f371c33823a483f2a669c21d59d08d6fdfe7d6cdeb4147f85bd9f5708416: Status 404 returned error can't find the container with id 47d3f371c33823a483f2a669c21d59d08d6fdfe7d6cdeb4147f85bd9f5708416 Mar 08 03:35:34.194341 master-0 kubenswrapper[33141]: I0308 03:35:34.194285 33141 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31335248-972e-4193-8525-86cdc3f2ad4f-console-config\") on node \"master-0\" DevicePath \"\"" Mar 08 03:35:34.194341 master-0 kubenswrapper[33141]: I0308 03:35:34.194327 33141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2nxt\" (UniqueName: \"kubernetes.io/projected/31335248-972e-4193-8525-86cdc3f2ad4f-kube-api-access-l2nxt\") on node \"master-0\" DevicePath \"\"" Mar 08 03:35:34.194341 master-0 kubenswrapper[33141]: I0308 03:35:34.194341 33141 reconciler_common.go:293] "Volume detached for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/31335248-972e-4193-8525-86cdc3f2ad4f-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 03:35:34.194341 master-0 kubenswrapper[33141]: I0308 03:35:34.194350 33141 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31335248-972e-4193-8525-86cdc3f2ad4f-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 03:35:34.194924 master-0 kubenswrapper[33141]: I0308 03:35:34.194360 33141 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 08 03:35:34.609663 master-0 kubenswrapper[33141]: I0308 03:35:34.609597 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-748f76c866-99l2l" Mar 08 03:35:34.609663 master-0 kubenswrapper[33141]: I0308 03:35:34.609688 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-748f76c866-99l2l" Mar 08 03:35:34.611725 master-0 kubenswrapper[33141]: I0308 03:35:34.611651 33141 patch_prober.go:28] interesting pod/console-748f76c866-99l2l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 08 03:35:34.611870 master-0 kubenswrapper[33141]: I0308 03:35:34.611730 33141 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-748f76c866-99l2l" podUID="04802a97-e959-423f-8ca7-4a8fb5e7e047" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 08 03:35:35.021098 master-0 kubenswrapper[33141]: I0308 03:35:35.021030 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-fd44b487d-l5wc7"
Mar 08 03:35:35.021098 master-0 kubenswrapper[33141]: I0308 03:35:35.021007 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fbfcd994f-49ft7" event={"ID":"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b","Type":"ContainerStarted","Data":"c02386a3e18dc137b5769051229bfe72e1b873c5d5f713a6682227fadacb819d"}
Mar 08 03:35:35.021508 master-0 kubenswrapper[33141]: I0308 03:35:35.021135 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fbfcd994f-49ft7" event={"ID":"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b","Type":"ContainerStarted","Data":"47d3f371c33823a483f2a669c21d59d08d6fdfe7d6cdeb4147f85bd9f5708416"}
Mar 08 03:35:35.160499 master-0 kubenswrapper[33141]: I0308 03:35:35.160388 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-fd44b487d-l5wc7"]
Mar 08 03:35:35.171629 master-0 kubenswrapper[33141]: I0308 03:35:35.171434 33141 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-fd44b487d-l5wc7"]
Mar 08 03:35:35.177291 master-0 kubenswrapper[33141]: I0308 03:35:35.177123 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6fbfcd994f-49ft7" podStartSLOduration=2.177047765 podStartE2EDuration="2.177047765s" podCreationTimestamp="2026-03-08 03:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:35:35.172818414 +0000 UTC m=+249.042711667" watchObservedRunningTime="2026-03-08 03:35:35.177047765 +0000 UTC m=+249.046941008"
Mar 08 03:35:35.216179 master-0 kubenswrapper[33141]: I0308 03:35:35.216097 33141 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31335248-972e-4193-8525-86cdc3f2ad4f-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 08 03:35:36.370460 master-0 kubenswrapper[33141]: I0308 03:35:36.370342 33141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31335248-972e-4193-8525-86cdc3f2ad4f" path="/var/lib/kubelet/pods/31335248-972e-4193-8525-86cdc3f2ad4f/volumes"
Mar 08 03:35:43.576783 master-0 kubenswrapper[33141]: I0308 03:35:43.576672 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6fbfcd994f-49ft7"
Mar 08 03:35:43.576783 master-0 kubenswrapper[33141]: I0308 03:35:43.576760 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6fbfcd994f-49ft7"
Mar 08 03:35:43.579704 master-0 kubenswrapper[33141]: I0308 03:35:43.579623 33141 patch_prober.go:28] interesting pod/console-6fbfcd994f-49ft7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" start-of-body=
Mar 08 03:35:43.579845 master-0 kubenswrapper[33141]: I0308 03:35:43.579708 33141 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6fbfcd994f-49ft7" podUID="d3a1244d-2bc6-40c7-96c7-8e464a55ff4b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused"
Mar 08 03:35:44.610318 master-0 kubenswrapper[33141]: I0308 03:35:44.610201 33141 patch_prober.go:28] interesting pod/console-748f76c866-99l2l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body=
Mar 08 03:35:44.611203 master-0 kubenswrapper[33141]: I0308 03:35:44.610333 33141 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-748f76c866-99l2l" podUID="04802a97-e959-423f-8ca7-4a8fb5e7e047" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused"
Mar 08 03:35:46.275247 master-0 kubenswrapper[33141]: I0308 03:35:46.275148 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5d54745774-gwhkw"]
Mar 08 03:35:46.276299 master-0 kubenswrapper[33141]: I0308 03:35:46.276255 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.280003 master-0 kubenswrapper[33141]: I0308 03:35:46.279937 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 08 03:35:46.280174 master-0 kubenswrapper[33141]: I0308 03:35:46.280062 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 08 03:35:46.280559 master-0 kubenswrapper[33141]: I0308 03:35:46.280517 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 08 03:35:46.280946 master-0 kubenswrapper[33141]: I0308 03:35:46.280885 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 08 03:35:46.281389 master-0 kubenswrapper[33141]: I0308 03:35:46.281319 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 08 03:35:46.281731 master-0 kubenswrapper[33141]: I0308 03:35:46.281688 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-8hcmx"
Mar 08 03:35:46.281836 master-0 kubenswrapper[33141]: I0308 03:35:46.281774 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 08 03:35:46.282128 master-0 kubenswrapper[33141]: I0308 03:35:46.282088 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 08 03:35:46.287185 master-0 kubenswrapper[33141]: I0308 03:35:46.287126 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 08 03:35:46.287412 master-0 kubenswrapper[33141]: I0308 03:35:46.287373 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 08 03:35:46.289771 master-0 kubenswrapper[33141]: I0308 03:35:46.289728 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 08 03:35:46.290426 master-0 kubenswrapper[33141]: I0308 03:35:46.289601 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 08 03:35:46.302462 master-0 kubenswrapper[33141]: I0308 03:35:46.302397 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.302732 master-0 kubenswrapper[33141]: I0308 03:35:46.302499 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.302732 master-0 kubenswrapper[33141]: I0308 03:35:46.302585 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.302732 master-0 kubenswrapper[33141]: I0308 03:35:46.302698 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-user-template-error\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.302959 master-0 kubenswrapper[33141]: I0308 03:35:46.302755 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-session\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.302959 master-0 kubenswrapper[33141]: I0308 03:35:46.302826 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.302959 master-0 kubenswrapper[33141]: I0308 03:35:46.302870 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.302959 master-0 kubenswrapper[33141]: I0308 03:35:46.302894 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-audit-policies\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.303192 master-0 kubenswrapper[33141]: I0308 03:35:46.303009 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.303192 master-0 kubenswrapper[33141]: I0308 03:35:46.303080 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl2kw\" (UniqueName: \"kubernetes.io/projected/dd43b0a4-2149-4ae0-8493-de3dc307b334-kube-api-access-vl2kw\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.303192 master-0 kubenswrapper[33141]: I0308 03:35:46.303116 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.303379 master-0 kubenswrapper[33141]: I0308 03:35:46.303276 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-user-template-login\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.303379 master-0 kubenswrapper[33141]: I0308 03:35:46.303337 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd43b0a4-2149-4ae0-8493-de3dc307b334-audit-dir\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.307455 master-0 kubenswrapper[33141]: I0308 03:35:46.307394 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 08 03:35:46.316064 master-0 kubenswrapper[33141]: I0308 03:35:46.315413 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 08 03:35:46.322032 master-0 kubenswrapper[33141]: I0308 03:35:46.321957 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5d54745774-gwhkw"]
Mar 08 03:35:46.404711 master-0 kubenswrapper[33141]: I0308 03:35:46.404633 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.404711 master-0 kubenswrapper[33141]: I0308 03:35:46.404720 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.405293 master-0 kubenswrapper[33141]: I0308 03:35:46.405264 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.405336 master-0 kubenswrapper[33141]: I0308 03:35:46.405297 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-user-template-error\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.405336 master-0 kubenswrapper[33141]: I0308 03:35:46.405318 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-session\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.405402 master-0 kubenswrapper[33141]: I0308 03:35:46.405344 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.405402 master-0 kubenswrapper[33141]: I0308 03:35:46.405368 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.405402 master-0 kubenswrapper[33141]: I0308 03:35:46.405386 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-audit-policies\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.405486 master-0 kubenswrapper[33141]: E0308 03:35:46.405466 33141 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found
Mar 08 03:35:46.405537 master-0 kubenswrapper[33141]: E0308 03:35:46.405517 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-cliconfig podName:dd43b0a4-2149-4ae0-8493-de3dc307b334 nodeName:}" failed. No retries permitted until 2026-03-08 03:35:46.905500484 +0000 UTC m=+260.775393677 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-cliconfig") pod "oauth-openshift-5d54745774-gwhkw" (UID: "dd43b0a4-2149-4ae0-8493-de3dc307b334") : configmap "v4-0-config-system-cliconfig" not found
Mar 08 03:35:46.405637 master-0 kubenswrapper[33141]: I0308 03:35:46.405572 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.406116 master-0 kubenswrapper[33141]: E0308 03:35:46.406066 33141 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: secret "v4-0-config-system-session" not found
Mar 08 03:35:46.406190 master-0 kubenswrapper[33141]: E0308 03:35:46.406169 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-session podName:dd43b0a4-2149-4ae0-8493-de3dc307b334 nodeName:}" failed. No retries permitted until 2026-03-08 03:35:46.906145781 +0000 UTC m=+260.776038984 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-session") pod "oauth-openshift-5d54745774-gwhkw" (UID: "dd43b0a4-2149-4ae0-8493-de3dc307b334") : secret "v4-0-config-system-session" not found
Mar 08 03:35:46.406245 master-0 kubenswrapper[33141]: I0308 03:35:46.406232 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-service-ca\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.406358 master-0 kubenswrapper[33141]: I0308 03:35:46.406311 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl2kw\" (UniqueName: \"kubernetes.io/projected/dd43b0a4-2149-4ae0-8493-de3dc307b334-kube-api-access-vl2kw\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.406358 master-0 kubenswrapper[33141]: I0308 03:35:46.406352 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.406458 master-0 kubenswrapper[33141]: I0308 03:35:46.406418 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-user-template-login\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.406458 master-0 kubenswrapper[33141]: I0308 03:35:46.406443 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd43b0a4-2149-4ae0-8493-de3dc307b334-audit-dir\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.407304 master-0 kubenswrapper[33141]: I0308 03:35:46.406563 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd43b0a4-2149-4ae0-8493-de3dc307b334-audit-dir\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.407304 master-0 kubenswrapper[33141]: I0308 03:35:46.406563 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-audit-policies\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.409317 master-0 kubenswrapper[33141]: I0308 03:35:46.409285 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.409317 master-0 kubenswrapper[33141]: I0308 03:35:46.409297 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.410322 master-0 kubenswrapper[33141]: I0308 03:35:46.410277 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-user-template-error\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.410380 master-0 kubenswrapper[33141]: I0308 03:35:46.410363 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.410489 master-0 kubenswrapper[33141]: I0308 03:35:46.410438 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.412547 master-0 kubenswrapper[33141]: I0308 03:35:46.412496 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-router-certs\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.413585 master-0 kubenswrapper[33141]: I0308 03:35:46.413539 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-user-template-login\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.424008 master-0 kubenswrapper[33141]: I0308 03:35:46.423890 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl2kw\" (UniqueName: \"kubernetes.io/projected/dd43b0a4-2149-4ae0-8493-de3dc307b334-kube-api-access-vl2kw\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.913329 master-0 kubenswrapper[33141]: I0308 03:35:46.913268 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.913722 master-0 kubenswrapper[33141]: E0308 03:35:46.913437 33141 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found
Mar 08 03:35:46.913785 master-0 kubenswrapper[33141]: E0308 03:35:46.913766 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-cliconfig podName:dd43b0a4-2149-4ae0-8493-de3dc307b334 nodeName:}" failed. No retries permitted until 2026-03-08 03:35:47.913748928 +0000 UTC m=+261.783642121 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-cliconfig") pod "oauth-openshift-5d54745774-gwhkw" (UID: "dd43b0a4-2149-4ae0-8493-de3dc307b334") : configmap "v4-0-config-system-cliconfig" not found
Mar 08 03:35:46.913929 master-0 kubenswrapper[33141]: I0308 03:35:46.913888 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-session\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw"
Mar 08 03:35:46.914153 master-0 kubenswrapper[33141]: E0308 03:35:46.914092 33141 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: secret "v4-0-config-system-session" not found
Mar 08 03:35:46.914231 master-0 kubenswrapper[33141]: E0308 03:35:46.914202 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-session podName:dd43b0a4-2149-4ae0-8493-de3dc307b334 nodeName:}" failed. No retries permitted until 2026-03-08 03:35:47.914180289 +0000 UTC m=+261.784073472 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-session") pod "oauth-openshift-5d54745774-gwhkw" (UID: "dd43b0a4-2149-4ae0-8493-de3dc307b334") : secret "v4-0-config-system-session" not found
Mar 08 03:35:47.198950 master-0 kubenswrapper[33141]: I0308 03:35:47.198785 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 08 03:35:47.200955 master-0 kubenswrapper[33141]: I0308 03:35:47.200931 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 08 03:35:47.203915 master-0 kubenswrapper[33141]: I0308 03:35:47.203868 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Mar 08 03:35:47.204135 master-0 kubenswrapper[33141]: I0308 03:35:47.203870 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Mar 08 03:35:47.204198 master-0 kubenswrapper[33141]: I0308 03:35:47.203890 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Mar 08 03:35:47.204198 master-0 kubenswrapper[33141]: I0308 03:35:47.203930 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Mar 08 03:35:47.204315 master-0 kubenswrapper[33141]: I0308 03:35:47.203942 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Mar 08 03:35:47.206133 master-0 kubenswrapper[33141]: I0308 03:35:47.205046 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Mar 08 03:35:47.208401 master-0 kubenswrapper[33141]: I0308 03:35:47.208344 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Mar 08 03:35:47.220093 master-0 kubenswrapper[33141]: I0308 03:35:47.220017 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Mar 08 03:35:47.222843 master-0 kubenswrapper[33141]: I0308 03:35:47.221148 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 08 03:35:47.318234 master-0 kubenswrapper[33141]: I0308 03:35:47.318169 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 03:35:47.318234 master-0 kubenswrapper[33141]: I0308 03:35:47.318222 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 03:35:47.319115 master-0 kubenswrapper[33141]: I0308 03:35:47.318452 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-web-config\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 03:35:47.319115 master-0 kubenswrapper[33141]: I0308 03:35:47.318489 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-tls-assets\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 03:35:47.319115 master-0 kubenswrapper[33141]: I0308 03:35:47.318509 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 03:35:47.319115 master-0 kubenswrapper[33141]: I0308 03:35:47.318543 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-config-out\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 03:35:47.319115 master-0 kubenswrapper[33141]: I0308 03:35:47.318567 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-config-volume\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 03:35:47.319115 master-0 kubenswrapper[33141]: I0308 03:35:47.318615 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 03:35:47.319115 master-0 kubenswrapper[33141]: I0308 03:35:47.318635 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 03:35:47.319115 master-0 kubenswrapper[33141]: I0308 03:35:47.318664 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 03:35:47.319115 master-0 kubenswrapper[33141]: I0308 03:35:47.318727 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 03:35:47.319115 master-0 kubenswrapper[33141]: I0308 03:35:47.318750 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdnsl\" (UniqueName: \"kubernetes.io/projected/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-kube-api-access-xdnsl\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 03:35:47.420366 master-0 kubenswrapper[33141]: I0308 03:35:47.420299 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-web-config\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 03:35:47.420366 master-0 kubenswrapper[33141]: I0308 03:35:47.420377 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-tls-assets\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 03:35:47.420657 master-0 kubenswrapper[33141]: I0308 03:35:47.420414 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 03:35:47.420657 master-0 kubenswrapper[33141]: I0308 03:35:47.420576 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-config-volume\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 03:35:47.421295 master-0 kubenswrapper[33141]: I0308 03:35:47.421247 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-config-out\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 03:35:47.421381 master-0 kubenswrapper[33141]: I0308 03:35:47.421294 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 03:35:47.421381 master-0 kubenswrapper[33141]: I0308 03:35:47.421334 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 03:35:47.421381 master-0 kubenswrapper[33141]: I0308 03:35:47.421372 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 03:35:47.421598 master-0 kubenswrapper[33141]: I0308 03:35:47.421413 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 03:35:47.421598 master-0 kubenswrapper[33141]: I0308 03:35:47.421469 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 08 03:35:47.421598 master-0 kubenswrapper[33141]: I0308 03:35:47.421499 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdnsl\" (UniqueName:
\"kubernetes.io/projected/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-kube-api-access-xdnsl\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 08 03:35:47.421598 master-0 kubenswrapper[33141]: I0308 03:35:47.421571 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 08 03:35:47.421775 master-0 kubenswrapper[33141]: I0308 03:35:47.421605 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 08 03:35:47.423893 master-0 kubenswrapper[33141]: I0308 03:35:47.423837 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 08 03:35:47.424133 master-0 kubenswrapper[33141]: I0308 03:35:47.424093 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-web-config\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 08 03:35:47.424657 master-0 kubenswrapper[33141]: I0308 03:35:47.424606 33141 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-tls-assets\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 08 03:35:47.424986 master-0 kubenswrapper[33141]: I0308 03:35:47.424886 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-config-volume\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 08 03:35:47.425385 master-0 kubenswrapper[33141]: I0308 03:35:47.425357 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 08 03:35:47.431921 master-0 kubenswrapper[33141]: I0308 03:35:47.429085 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 08 03:35:47.431921 master-0 kubenswrapper[33141]: I0308 03:35:47.430463 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 08 03:35:47.436385 master-0 kubenswrapper[33141]: I0308 03:35:47.435574 33141 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 08 03:35:47.454541 master-0 kubenswrapper[33141]: I0308 03:35:47.454419 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 08 03:35:47.459326 master-0 kubenswrapper[33141]: I0308 03:35:47.459282 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-config-out\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 08 03:35:47.476608 master-0 kubenswrapper[33141]: I0308 03:35:47.476578 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdnsl\" (UniqueName: \"kubernetes.io/projected/cbd6f132-3aa4-4114-9a59-e69aafa4cd1d-kube-api-access-xdnsl\") pod \"alertmanager-main-0\" (UID: \"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d\") " pod="openshift-monitoring/alertmanager-main-0" Mar 08 03:35:47.520162 master-0 kubenswrapper[33141]: I0308 03:35:47.518696 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 08 03:35:47.936027 master-0 kubenswrapper[33141]: I0308 03:35:47.935922 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-session\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw" Mar 08 03:35:47.936301 master-0 kubenswrapper[33141]: I0308 03:35:47.936060 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw" Mar 08 03:35:47.936301 master-0 kubenswrapper[33141]: E0308 03:35:47.936067 33141 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: secret "v4-0-config-system-session" not found Mar 08 03:35:47.936301 master-0 kubenswrapper[33141]: E0308 03:35:47.936151 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-session podName:dd43b0a4-2149-4ae0-8493-de3dc307b334 nodeName:}" failed. No retries permitted until 2026-03-08 03:35:49.936131318 +0000 UTC m=+263.806024511 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-session") pod "oauth-openshift-5d54745774-gwhkw" (UID: "dd43b0a4-2149-4ae0-8493-de3dc307b334") : secret "v4-0-config-system-session" not found Mar 08 03:35:47.936301 master-0 kubenswrapper[33141]: E0308 03:35:47.936235 33141 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 08 03:35:47.936589 master-0 kubenswrapper[33141]: E0308 03:35:47.936334 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-cliconfig podName:dd43b0a4-2149-4ae0-8493-de3dc307b334 nodeName:}" failed. No retries permitted until 2026-03-08 03:35:49.936311553 +0000 UTC m=+263.806204756 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-cliconfig") pod "oauth-openshift-5d54745774-gwhkw" (UID: "dd43b0a4-2149-4ae0-8493-de3dc307b334") : configmap "v4-0-config-system-cliconfig" not found Mar 08 03:35:47.975133 master-0 kubenswrapper[33141]: I0308 03:35:47.973690 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 08 03:35:47.979716 master-0 kubenswrapper[33141]: W0308 03:35:47.979655 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbd6f132_3aa4_4114_9a59_e69aafa4cd1d.slice/crio-b82ef2ebb7244da9898a3a839e60b52118012a0041cd5b2dc01f62830fad0578 WatchSource:0}: Error finding container b82ef2ebb7244da9898a3a839e60b52118012a0041cd5b2dc01f62830fad0578: Status 404 returned error can't find the container with id b82ef2ebb7244da9898a3a839e60b52118012a0041cd5b2dc01f62830fad0578 Mar 08 
03:35:48.140805 master-0 kubenswrapper[33141]: I0308 03:35:48.140749 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d","Type":"ContainerStarted","Data":"130f778d58496b7c3f85a3dee24addd90836daddd22358a79efe6725e33464e0"} Mar 08 03:35:48.140805 master-0 kubenswrapper[33141]: I0308 03:35:48.140792 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d","Type":"ContainerStarted","Data":"b82ef2ebb7244da9898a3a839e60b52118012a0041cd5b2dc01f62830fad0578"} Mar 08 03:35:48.252299 master-0 kubenswrapper[33141]: I0308 03:35:48.252145 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-cc54f9d45-86rbf"] Mar 08 03:35:48.254890 master-0 kubenswrapper[33141]: I0308 03:35:48.254844 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.258123 master-0 kubenswrapper[33141]: I0308 03:35:48.258065 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-9c93c1bm2nqd1" Mar 08 03:35:48.258270 master-0 kubenswrapper[33141]: I0308 03:35:48.258183 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 08 03:35:48.258414 master-0 kubenswrapper[33141]: I0308 03:35:48.258375 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 08 03:35:48.258505 master-0 kubenswrapper[33141]: I0308 03:35:48.258493 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 08 03:35:48.258628 master-0 kubenswrapper[33141]: I0308 03:35:48.258594 33141 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 08 03:35:48.258714 master-0 kubenswrapper[33141]: I0308 03:35:48.258666 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 08 03:35:48.289634 master-0 kubenswrapper[33141]: I0308 03:35:48.289193 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-cc54f9d45-86rbf"] Mar 08 03:35:48.342272 master-0 kubenswrapper[33141]: I0308 03:35:48.342221 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/e26c5ed4-e811-4efd-a607-41e0953c1d8a-secret-thanos-querier-tls\") pod \"thanos-querier-cc54f9d45-86rbf\" (UID: \"e26c5ed4-e811-4efd-a607-41e0953c1d8a\") " pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.342885 master-0 kubenswrapper[33141]: I0308 03:35:48.342859 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/e26c5ed4-e811-4efd-a607-41e0953c1d8a-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc54f9d45-86rbf\" (UID: \"e26c5ed4-e811-4efd-a607-41e0953c1d8a\") " pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.343104 master-0 kubenswrapper[33141]: I0308 03:35:48.343081 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/e26c5ed4-e811-4efd-a607-41e0953c1d8a-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc54f9d45-86rbf\" (UID: \"e26c5ed4-e811-4efd-a607-41e0953c1d8a\") " pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.343242 master-0 kubenswrapper[33141]: I0308 03:35:48.343224 33141 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e26c5ed4-e811-4efd-a607-41e0953c1d8a-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc54f9d45-86rbf\" (UID: \"e26c5ed4-e811-4efd-a607-41e0953c1d8a\") " pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.343459 master-0 kubenswrapper[33141]: I0308 03:35:48.343416 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkjgt\" (UniqueName: \"kubernetes.io/projected/e26c5ed4-e811-4efd-a607-41e0953c1d8a-kube-api-access-zkjgt\") pod \"thanos-querier-cc54f9d45-86rbf\" (UID: \"e26c5ed4-e811-4efd-a607-41e0953c1d8a\") " pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.343529 master-0 kubenswrapper[33141]: I0308 03:35:48.343471 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e26c5ed4-e811-4efd-a607-41e0953c1d8a-metrics-client-ca\") pod \"thanos-querier-cc54f9d45-86rbf\" (UID: \"e26c5ed4-e811-4efd-a607-41e0953c1d8a\") " pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.343578 master-0 kubenswrapper[33141]: I0308 03:35:48.343568 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e26c5ed4-e811-4efd-a607-41e0953c1d8a-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc54f9d45-86rbf\" (UID: \"e26c5ed4-e811-4efd-a607-41e0953c1d8a\") " pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.343622 master-0 kubenswrapper[33141]: I0308 03:35:48.343605 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: 
\"kubernetes.io/secret/e26c5ed4-e811-4efd-a607-41e0953c1d8a-secret-grpc-tls\") pod \"thanos-querier-cc54f9d45-86rbf\" (UID: \"e26c5ed4-e811-4efd-a607-41e0953c1d8a\") " pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.444877 master-0 kubenswrapper[33141]: I0308 03:35:48.444800 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/e26c5ed4-e811-4efd-a607-41e0953c1d8a-secret-thanos-querier-tls\") pod \"thanos-querier-cc54f9d45-86rbf\" (UID: \"e26c5ed4-e811-4efd-a607-41e0953c1d8a\") " pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.444877 master-0 kubenswrapper[33141]: I0308 03:35:48.444869 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/e26c5ed4-e811-4efd-a607-41e0953c1d8a-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc54f9d45-86rbf\" (UID: \"e26c5ed4-e811-4efd-a607-41e0953c1d8a\") " pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.445314 master-0 kubenswrapper[33141]: I0308 03:35:48.445219 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/e26c5ed4-e811-4efd-a607-41e0953c1d8a-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc54f9d45-86rbf\" (UID: \"e26c5ed4-e811-4efd-a607-41e0953c1d8a\") " pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.445462 master-0 kubenswrapper[33141]: I0308 03:35:48.445419 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e26c5ed4-e811-4efd-a607-41e0953c1d8a-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc54f9d45-86rbf\" (UID: 
\"e26c5ed4-e811-4efd-a607-41e0953c1d8a\") " pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.445608 master-0 kubenswrapper[33141]: I0308 03:35:48.445566 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkjgt\" (UniqueName: \"kubernetes.io/projected/e26c5ed4-e811-4efd-a607-41e0953c1d8a-kube-api-access-zkjgt\") pod \"thanos-querier-cc54f9d45-86rbf\" (UID: \"e26c5ed4-e811-4efd-a607-41e0953c1d8a\") " pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.445707 master-0 kubenswrapper[33141]: I0308 03:35:48.445670 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e26c5ed4-e811-4efd-a607-41e0953c1d8a-metrics-client-ca\") pod \"thanos-querier-cc54f9d45-86rbf\" (UID: \"e26c5ed4-e811-4efd-a607-41e0953c1d8a\") " pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.445924 master-0 kubenswrapper[33141]: I0308 03:35:48.445868 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e26c5ed4-e811-4efd-a607-41e0953c1d8a-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc54f9d45-86rbf\" (UID: \"e26c5ed4-e811-4efd-a607-41e0953c1d8a\") " pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.446048 master-0 kubenswrapper[33141]: I0308 03:35:48.446009 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e26c5ed4-e811-4efd-a607-41e0953c1d8a-secret-grpc-tls\") pod \"thanos-querier-cc54f9d45-86rbf\" (UID: \"e26c5ed4-e811-4efd-a607-41e0953c1d8a\") " pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.446654 master-0 kubenswrapper[33141]: I0308 03:35:48.446587 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e26c5ed4-e811-4efd-a607-41e0953c1d8a-metrics-client-ca\") pod \"thanos-querier-cc54f9d45-86rbf\" (UID: \"e26c5ed4-e811-4efd-a607-41e0953c1d8a\") " pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.448494 master-0 kubenswrapper[33141]: I0308 03:35:48.448456 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/e26c5ed4-e811-4efd-a607-41e0953c1d8a-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc54f9d45-86rbf\" (UID: \"e26c5ed4-e811-4efd-a607-41e0953c1d8a\") " pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.448981 master-0 kubenswrapper[33141]: I0308 03:35:48.448915 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e26c5ed4-e811-4efd-a607-41e0953c1d8a-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc54f9d45-86rbf\" (UID: \"e26c5ed4-e811-4efd-a607-41e0953c1d8a\") " pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.449086 master-0 kubenswrapper[33141]: I0308 03:35:48.449032 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/e26c5ed4-e811-4efd-a607-41e0953c1d8a-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc54f9d45-86rbf\" (UID: \"e26c5ed4-e811-4efd-a607-41e0953c1d8a\") " pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.449333 master-0 kubenswrapper[33141]: I0308 03:35:48.449288 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/e26c5ed4-e811-4efd-a607-41e0953c1d8a-secret-thanos-querier-tls\") pod \"thanos-querier-cc54f9d45-86rbf\" (UID: 
\"e26c5ed4-e811-4efd-a607-41e0953c1d8a\") " pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.449426 master-0 kubenswrapper[33141]: I0308 03:35:48.449334 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e26c5ed4-e811-4efd-a607-41e0953c1d8a-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc54f9d45-86rbf\" (UID: \"e26c5ed4-e811-4efd-a607-41e0953c1d8a\") " pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.451474 master-0 kubenswrapper[33141]: I0308 03:35:48.451370 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/e26c5ed4-e811-4efd-a607-41e0953c1d8a-secret-grpc-tls\") pod \"thanos-querier-cc54f9d45-86rbf\" (UID: \"e26c5ed4-e811-4efd-a607-41e0953c1d8a\") " pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.463626 master-0 kubenswrapper[33141]: I0308 03:35:48.463568 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkjgt\" (UniqueName: \"kubernetes.io/projected/e26c5ed4-e811-4efd-a607-41e0953c1d8a-kube-api-access-zkjgt\") pod \"thanos-querier-cc54f9d45-86rbf\" (UID: \"e26c5ed4-e811-4efd-a607-41e0953c1d8a\") " pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:48.600749 master-0 kubenswrapper[33141]: I0308 03:35:48.600681 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:49.070749 master-0 kubenswrapper[33141]: I0308 03:35:49.070697 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-cc54f9d45-86rbf"] Mar 08 03:35:49.089840 master-0 kubenswrapper[33141]: W0308 03:35:49.088383 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode26c5ed4_e811_4efd_a607_41e0953c1d8a.slice/crio-b36439037e0411f74b4332a9931fc7e71d109390dcb2e2ec1b41f244052d094c WatchSource:0}: Error finding container b36439037e0411f74b4332a9931fc7e71d109390dcb2e2ec1b41f244052d094c: Status 404 returned error can't find the container with id b36439037e0411f74b4332a9931fc7e71d109390dcb2e2ec1b41f244052d094c Mar 08 03:35:49.150158 master-0 kubenswrapper[33141]: I0308 03:35:49.150046 33141 generic.go:334] "Generic (PLEG): container finished" podID="cbd6f132-3aa4-4114-9a59-e69aafa4cd1d" containerID="130f778d58496b7c3f85a3dee24addd90836daddd22358a79efe6725e33464e0" exitCode=0 Mar 08 03:35:49.150158 master-0 kubenswrapper[33141]: I0308 03:35:49.150135 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d","Type":"ContainerDied","Data":"130f778d58496b7c3f85a3dee24addd90836daddd22358a79efe6725e33464e0"} Mar 08 03:35:49.156856 master-0 kubenswrapper[33141]: I0308 03:35:49.155714 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" event={"ID":"e26c5ed4-e811-4efd-a607-41e0953c1d8a","Type":"ContainerStarted","Data":"b36439037e0411f74b4332a9931fc7e71d109390dcb2e2ec1b41f244052d094c"} Mar 08 03:35:49.971250 master-0 kubenswrapper[33141]: I0308 03:35:49.971174 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-session\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw" Mar 08 03:35:49.971250 master-0 kubenswrapper[33141]: I0308 03:35:49.971255 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw" Mar 08 03:35:49.972117 master-0 kubenswrapper[33141]: E0308 03:35:49.971359 33141 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 08 03:35:49.972117 master-0 kubenswrapper[33141]: E0308 03:35:49.971432 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-cliconfig podName:dd43b0a4-2149-4ae0-8493-de3dc307b334 nodeName:}" failed. No retries permitted until 2026-03-08 03:35:53.971409603 +0000 UTC m=+267.841302806 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-cliconfig") pod "oauth-openshift-5d54745774-gwhkw" (UID: "dd43b0a4-2149-4ae0-8493-de3dc307b334") : configmap "v4-0-config-system-cliconfig" not found Mar 08 03:35:49.974738 master-0 kubenswrapper[33141]: I0308 03:35:49.974674 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-session\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw" Mar 08 03:35:50.948942 master-0 kubenswrapper[33141]: I0308 03:35:50.948875 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-f8578dbbb-gzqxh"] Mar 08 03:35:50.960558 master-0 kubenswrapper[33141]: I0308 03:35:50.960514 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:35:50.973365 master-0 kubenswrapper[33141]: I0308 03:35:50.972917 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-99d2o2jhvt58t" Mar 08 03:35:50.977053 master-0 kubenswrapper[33141]: I0308 03:35:50.976999 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-6977dfbb45-dwjx9"] Mar 08 03:35:50.977326 master-0 kubenswrapper[33141]: I0308 03:35:50.977252 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" podUID="1e82d678-b5bb-4aec-9b5d-435305e8bdc2" containerName="metrics-server" containerID="cri-o://f76a1bff6446c8bbd3a34e5b92f198922251d11d225fb45f11ae978bed808876" gracePeriod=170 Mar 08 03:35:50.985299 master-0 kubenswrapper[33141]: I0308 03:35:50.985238 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-f8578dbbb-gzqxh"] Mar 08 03:35:51.087511 master-0 kubenswrapper[33141]: I0308 03:35:51.087461 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6701b05d-5128-437f-9c1c-6fbbf80d5db8-client-ca-bundle\") pod \"metrics-server-f8578dbbb-gzqxh\" (UID: \"6701b05d-5128-437f-9c1c-6fbbf80d5db8\") " pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:35:51.087792 master-0 kubenswrapper[33141]: I0308 03:35:51.087775 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6701b05d-5128-437f-9c1c-6fbbf80d5db8-secret-metrics-client-certs\") pod \"metrics-server-f8578dbbb-gzqxh\" (UID: \"6701b05d-5128-437f-9c1c-6fbbf80d5db8\") " pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:35:51.087906 master-0 
kubenswrapper[33141]: I0308 03:35:51.087878 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6701b05d-5128-437f-9c1c-6fbbf80d5db8-secret-metrics-server-tls\") pod \"metrics-server-f8578dbbb-gzqxh\" (UID: \"6701b05d-5128-437f-9c1c-6fbbf80d5db8\") " pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:35:51.088001 master-0 kubenswrapper[33141]: I0308 03:35:51.087958 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6701b05d-5128-437f-9c1c-6fbbf80d5db8-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-f8578dbbb-gzqxh\" (UID: \"6701b05d-5128-437f-9c1c-6fbbf80d5db8\") " pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:35:51.088054 master-0 kubenswrapper[33141]: I0308 03:35:51.088022 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khs69\" (UniqueName: \"kubernetes.io/projected/6701b05d-5128-437f-9c1c-6fbbf80d5db8-kube-api-access-khs69\") pod \"metrics-server-f8578dbbb-gzqxh\" (UID: \"6701b05d-5128-437f-9c1c-6fbbf80d5db8\") " pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:35:51.088101 master-0 kubenswrapper[33141]: I0308 03:35:51.088084 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6701b05d-5128-437f-9c1c-6fbbf80d5db8-audit-log\") pod \"metrics-server-f8578dbbb-gzqxh\" (UID: \"6701b05d-5128-437f-9c1c-6fbbf80d5db8\") " pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:35:51.088137 master-0 kubenswrapper[33141]: I0308 03:35:51.088118 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" 
(UniqueName: \"kubernetes.io/configmap/6701b05d-5128-437f-9c1c-6fbbf80d5db8-metrics-server-audit-profiles\") pod \"metrics-server-f8578dbbb-gzqxh\" (UID: \"6701b05d-5128-437f-9c1c-6fbbf80d5db8\") " pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:35:51.188893 master-0 kubenswrapper[33141]: I0308 03:35:51.188834 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khs69\" (UniqueName: \"kubernetes.io/projected/6701b05d-5128-437f-9c1c-6fbbf80d5db8-kube-api-access-khs69\") pod \"metrics-server-f8578dbbb-gzqxh\" (UID: \"6701b05d-5128-437f-9c1c-6fbbf80d5db8\") " pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:35:51.188893 master-0 kubenswrapper[33141]: I0308 03:35:51.188905 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6701b05d-5128-437f-9c1c-6fbbf80d5db8-audit-log\") pod \"metrics-server-f8578dbbb-gzqxh\" (UID: \"6701b05d-5128-437f-9c1c-6fbbf80d5db8\") " pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:35:51.189266 master-0 kubenswrapper[33141]: I0308 03:35:51.188969 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6701b05d-5128-437f-9c1c-6fbbf80d5db8-metrics-server-audit-profiles\") pod \"metrics-server-f8578dbbb-gzqxh\" (UID: \"6701b05d-5128-437f-9c1c-6fbbf80d5db8\") " pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:35:51.189266 master-0 kubenswrapper[33141]: I0308 03:35:51.189008 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6701b05d-5128-437f-9c1c-6fbbf80d5db8-client-ca-bundle\") pod \"metrics-server-f8578dbbb-gzqxh\" (UID: \"6701b05d-5128-437f-9c1c-6fbbf80d5db8\") " pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:35:51.189266 master-0 
kubenswrapper[33141]: I0308 03:35:51.189226 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6701b05d-5128-437f-9c1c-6fbbf80d5db8-secret-metrics-client-certs\") pod \"metrics-server-f8578dbbb-gzqxh\" (UID: \"6701b05d-5128-437f-9c1c-6fbbf80d5db8\") " pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:35:51.189923 master-0 kubenswrapper[33141]: I0308 03:35:51.189308 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6701b05d-5128-437f-9c1c-6fbbf80d5db8-secret-metrics-server-tls\") pod \"metrics-server-f8578dbbb-gzqxh\" (UID: \"6701b05d-5128-437f-9c1c-6fbbf80d5db8\") " pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:35:51.189923 master-0 kubenswrapper[33141]: I0308 03:35:51.189375 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6701b05d-5128-437f-9c1c-6fbbf80d5db8-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-f8578dbbb-gzqxh\" (UID: \"6701b05d-5128-437f-9c1c-6fbbf80d5db8\") " pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:35:51.189923 master-0 kubenswrapper[33141]: I0308 03:35:51.189858 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6701b05d-5128-437f-9c1c-6fbbf80d5db8-audit-log\") pod \"metrics-server-f8578dbbb-gzqxh\" (UID: \"6701b05d-5128-437f-9c1c-6fbbf80d5db8\") " pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:35:51.190507 master-0 kubenswrapper[33141]: I0308 03:35:51.190120 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6701b05d-5128-437f-9c1c-6fbbf80d5db8-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-f8578dbbb-gzqxh\" (UID: \"6701b05d-5128-437f-9c1c-6fbbf80d5db8\") " pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:35:51.190507 master-0 kubenswrapper[33141]: I0308 03:35:51.190293 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6701b05d-5128-437f-9c1c-6fbbf80d5db8-metrics-server-audit-profiles\") pod \"metrics-server-f8578dbbb-gzqxh\" (UID: \"6701b05d-5128-437f-9c1c-6fbbf80d5db8\") " pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:35:51.193217 master-0 kubenswrapper[33141]: I0308 03:35:51.193182 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6701b05d-5128-437f-9c1c-6fbbf80d5db8-secret-metrics-client-certs\") pod \"metrics-server-f8578dbbb-gzqxh\" (UID: \"6701b05d-5128-437f-9c1c-6fbbf80d5db8\") " pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:35:51.195173 master-0 kubenswrapper[33141]: I0308 03:35:51.194674 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6701b05d-5128-437f-9c1c-6fbbf80d5db8-secret-metrics-server-tls\") pod \"metrics-server-f8578dbbb-gzqxh\" (UID: \"6701b05d-5128-437f-9c1c-6fbbf80d5db8\") " pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:35:51.195173 master-0 kubenswrapper[33141]: I0308 03:35:51.195122 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6701b05d-5128-437f-9c1c-6fbbf80d5db8-client-ca-bundle\") pod \"metrics-server-f8578dbbb-gzqxh\" (UID: \"6701b05d-5128-437f-9c1c-6fbbf80d5db8\") " pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:35:51.205416 master-0 
kubenswrapper[33141]: I0308 03:35:51.205343 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khs69\" (UniqueName: \"kubernetes.io/projected/6701b05d-5128-437f-9c1c-6fbbf80d5db8-kube-api-access-khs69\") pod \"metrics-server-f8578dbbb-gzqxh\" (UID: \"6701b05d-5128-437f-9c1c-6fbbf80d5db8\") " pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:35:51.304845 master-0 kubenswrapper[33141]: I0308 03:35:51.304802 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:35:52.044629 master-0 kubenswrapper[33141]: I0308 03:35:52.044558 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-f8578dbbb-gzqxh"] Mar 08 03:35:52.052101 master-0 kubenswrapper[33141]: W0308 03:35:52.052055 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6701b05d_5128_437f_9c1c_6fbbf80d5db8.slice/crio-79e49a4286f66591408166478e9d3bf4abd3ab15cc428c7616bf6aeaed8bb4aa WatchSource:0}: Error finding container 79e49a4286f66591408166478e9d3bf4abd3ab15cc428c7616bf6aeaed8bb4aa: Status 404 returned error can't find the container with id 79e49a4286f66591408166478e9d3bf4abd3ab15cc428c7616bf6aeaed8bb4aa Mar 08 03:35:52.175971 master-0 kubenswrapper[33141]: I0308 03:35:52.175921 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" event={"ID":"6701b05d-5128-437f-9c1c-6fbbf80d5db8","Type":"ContainerStarted","Data":"79e49a4286f66591408166478e9d3bf4abd3ab15cc428c7616bf6aeaed8bb4aa"} Mar 08 03:35:52.178277 master-0 kubenswrapper[33141]: I0308 03:35:52.178219 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" 
event={"ID":"e26c5ed4-e811-4efd-a607-41e0953c1d8a","Type":"ContainerStarted","Data":"3a0b80c4e2f52aea04fd88f8e5154c7b9cd18fb01940956c817cd79b1e1963da"} Mar 08 03:35:52.178277 master-0 kubenswrapper[33141]: I0308 03:35:52.178274 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" event={"ID":"e26c5ed4-e811-4efd-a607-41e0953c1d8a","Type":"ContainerStarted","Data":"4d153d6f6012d5baa57ef7340973bfc53e7fbc527ed31d16adada08ae97d00ef"} Mar 08 03:35:52.178421 master-0 kubenswrapper[33141]: I0308 03:35:52.178287 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" event={"ID":"e26c5ed4-e811-4efd-a607-41e0953c1d8a","Type":"ContainerStarted","Data":"038fe03353114cd3be8651412217741e530d620ecbbb7f05e53549ab25c441fe"} Mar 08 03:35:52.184079 master-0 kubenswrapper[33141]: I0308 03:35:52.182334 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d","Type":"ContainerStarted","Data":"c6b449c7beeaa0b75246ece1befefdc7597f1904ab85a2d5200b5d00cd223a25"} Mar 08 03:35:52.184079 master-0 kubenswrapper[33141]: I0308 03:35:52.182368 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d","Type":"ContainerStarted","Data":"4c64dc2f4be30402cedc72be823fc1a977d7745877acc799190dd219f98d8f02"} Mar 08 03:35:52.184079 master-0 kubenswrapper[33141]: I0308 03:35:52.182378 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d","Type":"ContainerStarted","Data":"4d9846585faf964e5224238aa7424103ed96b14493581c90b6f0c1651711c099"} Mar 08 03:35:52.456390 master-0 kubenswrapper[33141]: I0308 03:35:52.456334 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-authentication/oauth-openshift-5d54745774-gwhkw"] Mar 08 03:35:52.456829 master-0 kubenswrapper[33141]: E0308 03:35:52.456797 33141 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[v4-0-config-system-cliconfig], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw" podUID="dd43b0a4-2149-4ae0-8493-de3dc307b334" Mar 08 03:35:52.596609 master-0 kubenswrapper[33141]: I0308 03:35:52.596474 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 08 03:35:52.598621 master-0 kubenswrapper[33141]: I0308 03:35:52.598597 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.600434 master-0 kubenswrapper[33141]: I0308 03:35:52.600389 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 08 03:35:52.605882 master-0 kubenswrapper[33141]: I0308 03:35:52.605855 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-ejf3rfa26fkl2" Mar 08 03:35:52.606063 master-0 kubenswrapper[33141]: I0308 03:35:52.606032 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 08 03:35:52.606213 master-0 kubenswrapper[33141]: I0308 03:35:52.606190 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 08 03:35:52.606313 master-0 kubenswrapper[33141]: I0308 03:35:52.606295 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 08 03:35:52.606469 master-0 kubenswrapper[33141]: I0308 03:35:52.606448 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" 
Mar 08 03:35:52.606615 master-0 kubenswrapper[33141]: I0308 03:35:52.606585 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 08 03:35:52.606693 master-0 kubenswrapper[33141]: I0308 03:35:52.606652 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 08 03:35:52.606737 master-0 kubenswrapper[33141]: I0308 03:35:52.606707 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 08 03:35:52.607019 master-0 kubenswrapper[33141]: I0308 03:35:52.606999 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 08 03:35:52.610025 master-0 kubenswrapper[33141]: I0308 03:35:52.609949 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 08 03:35:52.618377 master-0 kubenswrapper[33141]: I0308 03:35:52.618339 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 08 03:35:52.640394 master-0 kubenswrapper[33141]: I0308 03:35:52.640324 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 08 03:35:52.710399 master-0 kubenswrapper[33141]: I0308 03:35:52.710325 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.710399 master-0 kubenswrapper[33141]: I0308 03:35:52.710392 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.710623 master-0 kubenswrapper[33141]: I0308 03:35:52.710435 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-config\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.710623 master-0 kubenswrapper[33141]: I0308 03:35:52.710468 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.710623 master-0 kubenswrapper[33141]: I0308 03:35:52.710545 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.710623 master-0 kubenswrapper[33141]: I0308 03:35:52.710567 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.710623 master-0 kubenswrapper[33141]: I0308 
03:35:52.710625 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.710801 master-0 kubenswrapper[33141]: I0308 03:35:52.710694 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.710801 master-0 kubenswrapper[33141]: I0308 03:35:52.710776 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cxjw\" (UniqueName: \"kubernetes.io/projected/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-kube-api-access-7cxjw\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.710872 master-0 kubenswrapper[33141]: I0308 03:35:52.710803 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.710872 master-0 kubenswrapper[33141]: I0308 03:35:52.710824 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: 
\"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.710872 master-0 kubenswrapper[33141]: I0308 03:35:52.710864 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.710997 master-0 kubenswrapper[33141]: I0308 03:35:52.710889 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-web-config\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.710997 master-0 kubenswrapper[33141]: I0308 03:35:52.710938 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.710997 master-0 kubenswrapper[33141]: I0308 03:35:52.710958 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.711088 master-0 kubenswrapper[33141]: I0308 03:35:52.711000 33141 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-config-out\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.711088 master-0 kubenswrapper[33141]: I0308 03:35:52.711020 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.711088 master-0 kubenswrapper[33141]: I0308 03:35:52.711039 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.813160 master-0 kubenswrapper[33141]: I0308 03:35:52.812373 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-config\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.813160 master-0 kubenswrapper[33141]: I0308 03:35:52.812587 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.813578 master-0 kubenswrapper[33141]: I0308 
03:35:52.813535 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.813691 master-0 kubenswrapper[33141]: I0308 03:35:52.813659 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.813763 master-0 kubenswrapper[33141]: I0308 03:35:52.813673 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.813864 master-0 kubenswrapper[33141]: I0308 03:35:52.813851 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.814005 master-0 kubenswrapper[33141]: I0308 03:35:52.813991 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.814123 master-0 
kubenswrapper[33141]: I0308 03:35:52.814109 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cxjw\" (UniqueName: \"kubernetes.io/projected/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-kube-api-access-7cxjw\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.814202 master-0 kubenswrapper[33141]: I0308 03:35:52.814190 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.814279 master-0 kubenswrapper[33141]: I0308 03:35:52.814267 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.814394 master-0 kubenswrapper[33141]: I0308 03:35:52.814372 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.814492 master-0 kubenswrapper[33141]: I0308 03:35:52.814474 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-web-config\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.814595 master-0 kubenswrapper[33141]: I0308 03:35:52.814578 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.814693 master-0 kubenswrapper[33141]: I0308 03:35:52.814673 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.814855 master-0 kubenswrapper[33141]: I0308 03:35:52.814828 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.814942 master-0 kubenswrapper[33141]: I0308 03:35:52.814915 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-config-out\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.818945 master-0 kubenswrapper[33141]: I0308 03:35:52.817401 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-prometheus-k8s-db\") pod \"prometheus-k8s-0\" 
(UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.818945 master-0 kubenswrapper[33141]: I0308 03:35:52.817475 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.818945 master-0 kubenswrapper[33141]: I0308 03:35:52.817516 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.818945 master-0 kubenswrapper[33141]: I0308 03:35:52.817576 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.818945 master-0 kubenswrapper[33141]: I0308 03:35:52.817607 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.818945 master-0 kubenswrapper[33141]: I0308 03:35:52.818278 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.822944 master-0 kubenswrapper[33141]: I0308 03:35:52.820467 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.822944 master-0 kubenswrapper[33141]: I0308 03:35:52.821201 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.822944 master-0 kubenswrapper[33141]: I0308 03:35:52.821961 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-config-out\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.823078 master-0 kubenswrapper[33141]: I0308 03:35:52.823037 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.823733 master-0 kubenswrapper[33141]: I0308 03:35:52.823706 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: 
\"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.827259 master-0 kubenswrapper[33141]: I0308 03:35:52.824009 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-config\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.827259 master-0 kubenswrapper[33141]: I0308 03:35:52.824519 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-web-config\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.827259 master-0 kubenswrapper[33141]: I0308 03:35:52.825436 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.828153 master-0 kubenswrapper[33141]: I0308 03:35:52.828103 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.830080 master-0 kubenswrapper[33141]: I0308 03:35:52.828203 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.830615 master-0 kubenswrapper[33141]: I0308 03:35:52.830565 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.830773 master-0 kubenswrapper[33141]: I0308 03:35:52.830739 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.833954 master-0 kubenswrapper[33141]: I0308 03:35:52.833894 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.837966 master-0 kubenswrapper[33141]: I0308 03:35:52.837923 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cxjw\" (UniqueName: \"kubernetes.io/projected/100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd-kube-api-access-7cxjw\") pod \"prometheus-k8s-0\" (UID: \"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:52.925507 master-0 kubenswrapper[33141]: I0308 03:35:52.925377 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:35:53.197145 master-0 kubenswrapper[33141]: I0308 03:35:53.196323 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" event={"ID":"6701b05d-5128-437f-9c1c-6fbbf80d5db8","Type":"ContainerStarted","Data":"393792fc69d2283e702f789bb306133fba3e1e9be97a011bd762a921f5993049"} Mar 08 03:35:53.214281 master-0 kubenswrapper[33141]: I0308 03:35:53.211543 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw" Mar 08 03:35:53.214281 master-0 kubenswrapper[33141]: I0308 03:35:53.212616 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d","Type":"ContainerStarted","Data":"91cd57c67a7845fd972afebc6947568359350e7b37fa7a2559de8c212f68aa2f"} Mar 08 03:35:53.214281 master-0 kubenswrapper[33141]: I0308 03:35:53.212672 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d","Type":"ContainerStarted","Data":"3c80cfa9bf343f91692276c64a6a1648f6d3789ce2ebe89ef28e83374294b536"} Mar 08 03:35:53.230899 master-0 kubenswrapper[33141]: I0308 03:35:53.230861 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw" Mar 08 03:35:53.238485 master-0 kubenswrapper[33141]: I0308 03:35:53.238386 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" podStartSLOduration=3.238365768 podStartE2EDuration="3.238365768s" podCreationTimestamp="2026-03-08 03:35:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:35:53.230094472 +0000 UTC m=+267.099987675" watchObservedRunningTime="2026-03-08 03:35:53.238365768 +0000 UTC m=+267.108258961" Mar 08 03:35:53.333474 master-0 kubenswrapper[33141]: I0308 03:35:53.333430 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-router-certs\") pod \"dd43b0a4-2149-4ae0-8493-de3dc307b334\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " Mar 08 03:35:53.333474 master-0 kubenswrapper[33141]: I0308 03:35:53.333483 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-service-ca\") pod \"dd43b0a4-2149-4ae0-8493-de3dc307b334\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " Mar 08 03:35:53.334000 master-0 kubenswrapper[33141]: I0308 03:35:53.333507 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-session\") pod \"dd43b0a4-2149-4ae0-8493-de3dc307b334\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " Mar 08 03:35:53.334000 master-0 kubenswrapper[33141]: I0308 03:35:53.333542 33141 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-user-template-error\") pod \"dd43b0a4-2149-4ae0-8493-de3dc307b334\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " Mar 08 03:35:53.334000 master-0 kubenswrapper[33141]: I0308 03:35:53.333589 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl2kw\" (UniqueName: \"kubernetes.io/projected/dd43b0a4-2149-4ae0-8493-de3dc307b334-kube-api-access-vl2kw\") pod \"dd43b0a4-2149-4ae0-8493-de3dc307b334\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " Mar 08 03:35:53.334000 master-0 kubenswrapper[33141]: I0308 03:35:53.333670 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-user-template-login\") pod \"dd43b0a4-2149-4ae0-8493-de3dc307b334\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " Mar 08 03:35:53.334000 master-0 kubenswrapper[33141]: I0308 03:35:53.333739 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-audit-policies\") pod \"dd43b0a4-2149-4ae0-8493-de3dc307b334\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " Mar 08 03:35:53.334000 master-0 kubenswrapper[33141]: I0308 03:35:53.333785 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-serving-cert\") pod \"dd43b0a4-2149-4ae0-8493-de3dc307b334\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " Mar 08 03:35:53.334000 master-0 kubenswrapper[33141]: I0308 03:35:53.333810 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-trusted-ca-bundle\") pod \"dd43b0a4-2149-4ae0-8493-de3dc307b334\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " Mar 08 03:35:53.334000 master-0 kubenswrapper[33141]: I0308 03:35:53.333885 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-user-template-provider-selection\") pod \"dd43b0a4-2149-4ae0-8493-de3dc307b334\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " Mar 08 03:35:53.334000 master-0 kubenswrapper[33141]: I0308 03:35:53.333963 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd43b0a4-2149-4ae0-8493-de3dc307b334-audit-dir\") pod \"dd43b0a4-2149-4ae0-8493-de3dc307b334\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " Mar 08 03:35:53.334666 master-0 kubenswrapper[33141]: I0308 03:35:53.334028 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-ocp-branding-template\") pod \"dd43b0a4-2149-4ae0-8493-de3dc307b334\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " Mar 08 03:35:53.334839 master-0 kubenswrapper[33141]: I0308 03:35:53.334806 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "dd43b0a4-2149-4ae0-8493-de3dc307b334" (UID: "dd43b0a4-2149-4ae0-8493-de3dc307b334"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:35:53.336210 master-0 kubenswrapper[33141]: I0308 03:35:53.336168 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "dd43b0a4-2149-4ae0-8493-de3dc307b334" (UID: "dd43b0a4-2149-4ae0-8493-de3dc307b334"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:35:53.336376 master-0 kubenswrapper[33141]: I0308 03:35:53.336331 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "dd43b0a4-2149-4ae0-8493-de3dc307b334" (UID: "dd43b0a4-2149-4ae0-8493-de3dc307b334"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:35:53.336597 master-0 kubenswrapper[33141]: I0308 03:35:53.336554 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd43b0a4-2149-4ae0-8493-de3dc307b334-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "dd43b0a4-2149-4ae0-8493-de3dc307b334" (UID: "dd43b0a4-2149-4ae0-8493-de3dc307b334"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:35:53.338805 master-0 kubenswrapper[33141]: I0308 03:35:53.337421 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "dd43b0a4-2149-4ae0-8493-de3dc307b334" (UID: "dd43b0a4-2149-4ae0-8493-de3dc307b334"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:35:53.339188 master-0 kubenswrapper[33141]: I0308 03:35:53.339133 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd43b0a4-2149-4ae0-8493-de3dc307b334-kube-api-access-vl2kw" (OuterVolumeSpecName: "kube-api-access-vl2kw") pod "dd43b0a4-2149-4ae0-8493-de3dc307b334" (UID: "dd43b0a4-2149-4ae0-8493-de3dc307b334"). InnerVolumeSpecName "kube-api-access-vl2kw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:35:53.341889 master-0 kubenswrapper[33141]: I0308 03:35:53.341777 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "dd43b0a4-2149-4ae0-8493-de3dc307b334" (UID: "dd43b0a4-2149-4ae0-8493-de3dc307b334"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:35:53.342183 master-0 kubenswrapper[33141]: I0308 03:35:53.342153 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "dd43b0a4-2149-4ae0-8493-de3dc307b334" (UID: "dd43b0a4-2149-4ae0-8493-de3dc307b334"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:35:53.342398 master-0 kubenswrapper[33141]: I0308 03:35:53.342350 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "dd43b0a4-2149-4ae0-8493-de3dc307b334" (UID: "dd43b0a4-2149-4ae0-8493-de3dc307b334"). 
InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:35:53.342465 master-0 kubenswrapper[33141]: I0308 03:35:53.342442 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "dd43b0a4-2149-4ae0-8493-de3dc307b334" (UID: "dd43b0a4-2149-4ae0-8493-de3dc307b334"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:35:53.343465 master-0 kubenswrapper[33141]: I0308 03:35:53.343435 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "dd43b0a4-2149-4ae0-8493-de3dc307b334" (UID: "dd43b0a4-2149-4ae0-8493-de3dc307b334"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:35:53.345824 master-0 kubenswrapper[33141]: I0308 03:35:53.345782 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "dd43b0a4-2149-4ae0-8493-de3dc307b334" (UID: "dd43b0a4-2149-4ae0-8493-de3dc307b334"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:35:53.435947 master-0 kubenswrapper[33141]: I0308 03:35:53.435881 33141 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dd43b0a4-2149-4ae0-8493-de3dc307b334-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 03:35:53.435947 master-0 kubenswrapper[33141]: I0308 03:35:53.435938 33141 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Mar 08 03:35:53.435947 master-0 kubenswrapper[33141]: I0308 03:35:53.435950 33141 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Mar 08 03:35:53.436216 master-0 kubenswrapper[33141]: I0308 03:35:53.435963 33141 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 08 03:35:53.436216 master-0 kubenswrapper[33141]: I0308 03:35:53.435975 33141 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Mar 08 03:35:53.436216 master-0 kubenswrapper[33141]: I0308 03:35:53.435985 33141 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Mar 08 03:35:53.436216 master-0 kubenswrapper[33141]: I0308 
03:35:53.435999 33141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl2kw\" (UniqueName: \"kubernetes.io/projected/dd43b0a4-2149-4ae0-8493-de3dc307b334-kube-api-access-vl2kw\") on node \"master-0\" DevicePath \"\"" Mar 08 03:35:53.436216 master-0 kubenswrapper[33141]: I0308 03:35:53.436007 33141 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Mar 08 03:35:53.436216 master-0 kubenswrapper[33141]: I0308 03:35:53.436016 33141 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-audit-policies\") on node \"master-0\" DevicePath \"\"" Mar 08 03:35:53.436216 master-0 kubenswrapper[33141]: I0308 03:35:53.436025 33141 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 08 03:35:53.436216 master-0 kubenswrapper[33141]: I0308 03:35:53.436035 33141 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 08 03:35:53.436216 master-0 kubenswrapper[33141]: I0308 03:35:53.436044 33141 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Mar 08 03:35:53.583631 master-0 kubenswrapper[33141]: I0308 03:35:53.583568 33141 patch_prober.go:28] interesting pod/console-6fbfcd994f-49ft7 container/console 
namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" start-of-body= Mar 08 03:35:53.583853 master-0 kubenswrapper[33141]: I0308 03:35:53.583646 33141 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6fbfcd994f-49ft7" podUID="d3a1244d-2bc6-40c7-96c7-8e464a55ff4b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" Mar 08 03:35:53.591120 master-0 kubenswrapper[33141]: I0308 03:35:53.591078 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 08 03:35:54.043182 master-0 kubenswrapper[33141]: I0308 03:35:54.043140 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw" Mar 08 03:35:54.044027 master-0 kubenswrapper[33141]: I0308 03:35:54.043979 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5d54745774-gwhkw\" (UID: \"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw" Mar 08 03:35:54.144224 master-0 kubenswrapper[33141]: I0308 03:35:54.144151 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-cliconfig\") pod \"dd43b0a4-2149-4ae0-8493-de3dc307b334\" (UID: 
\"dd43b0a4-2149-4ae0-8493-de3dc307b334\") " Mar 08 03:35:54.145042 master-0 kubenswrapper[33141]: I0308 03:35:54.145020 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "dd43b0a4-2149-4ae0-8493-de3dc307b334" (UID: "dd43b0a4-2149-4ae0-8493-de3dc307b334"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:35:54.228677 master-0 kubenswrapper[33141]: I0308 03:35:54.228547 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"cbd6f132-3aa4-4114-9a59-e69aafa4cd1d","Type":"ContainerStarted","Data":"3b25a4323c61bad2382a4b6801b03779d9b7379eb6ba958feec5cec46a000382"} Mar 08 03:35:54.237257 master-0 kubenswrapper[33141]: I0308 03:35:54.237170 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" event={"ID":"e26c5ed4-e811-4efd-a607-41e0953c1d8a","Type":"ContainerStarted","Data":"26b434b218a179cf33991f60ede505560341ff08fad479c421aeaf60dc42c5d5"} Mar 08 03:35:54.237489 master-0 kubenswrapper[33141]: I0308 03:35:54.237270 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" event={"ID":"e26c5ed4-e811-4efd-a607-41e0953c1d8a","Type":"ContainerStarted","Data":"c64a5aeff803e1a5a4031fc9dd93d0f06e7ed015c5ca81c69415c104e29db43a"} Mar 08 03:35:54.237489 master-0 kubenswrapper[33141]: I0308 03:35:54.237294 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" event={"ID":"e26c5ed4-e811-4efd-a607-41e0953c1d8a","Type":"ContainerStarted","Data":"334e3ac6b5403bb4afe7c606894dae44ae0a77273a4ff4773a18d2c8474df0fa"} Mar 08 03:35:54.237641 master-0 kubenswrapper[33141]: I0308 03:35:54.237468 33141 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:54.240914 master-0 kubenswrapper[33141]: I0308 03:35:54.240836 33141 generic.go:334] "Generic (PLEG): container finished" podID="100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd" containerID="08a7f45d53d84394ecafa806e7949720310aefadea28519848cac78c3b7e540c" exitCode=0 Mar 08 03:35:54.240914 master-0 kubenswrapper[33141]: I0308 03:35:54.240967 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5d54745774-gwhkw" Mar 08 03:35:54.241238 master-0 kubenswrapper[33141]: I0308 03:35:54.240968 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd","Type":"ContainerDied","Data":"08a7f45d53d84394ecafa806e7949720310aefadea28519848cac78c3b7e540c"} Mar 08 03:35:54.241238 master-0 kubenswrapper[33141]: I0308 03:35:54.241048 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd","Type":"ContainerStarted","Data":"5ef5269a90fc71295371e4c7e3b9331c7dc282c3d085abb4b48cf7496f234761"} Mar 08 03:35:54.246057 master-0 kubenswrapper[33141]: I0308 03:35:54.246007 33141 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/dd43b0a4-2149-4ae0-8493-de3dc307b334-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Mar 08 03:35:54.285608 master-0 kubenswrapper[33141]: I0308 03:35:54.285440 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.228273485 podStartE2EDuration="7.285402973s" podCreationTimestamp="2026-03-08 03:35:47 +0000 UTC" firstStartedPulling="2026-03-08 03:35:49.155663232 +0000 UTC m=+263.025556425" lastFinishedPulling="2026-03-08 
03:35:53.21279272 +0000 UTC m=+267.082685913" observedRunningTime="2026-03-08 03:35:54.276860899 +0000 UTC m=+268.146754102" watchObservedRunningTime="2026-03-08 03:35:54.285402973 +0000 UTC m=+268.155296246" Mar 08 03:35:54.387307 master-0 kubenswrapper[33141]: I0308 03:35:54.387188 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-5d54745774-gwhkw"] Mar 08 03:35:54.397736 master-0 kubenswrapper[33141]: I0308 03:35:54.397675 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-69cd7f769d-d4snc"] Mar 08 03:35:54.399761 master-0 kubenswrapper[33141]: I0308 03:35:54.399328 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.403583 master-0 kubenswrapper[33141]: I0308 03:35:54.403550 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 08 03:35:54.404473 master-0 kubenswrapper[33141]: I0308 03:35:54.403674 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-8hcmx" Mar 08 03:35:54.404559 master-0 kubenswrapper[33141]: I0308 03:35:54.404261 33141 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-5d54745774-gwhkw"] Mar 08 03:35:54.404559 master-0 kubenswrapper[33141]: I0308 03:35:54.403720 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 08 03:35:54.404665 master-0 kubenswrapper[33141]: I0308 03:35:54.403760 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 08 03:35:54.404665 master-0 kubenswrapper[33141]: I0308 03:35:54.404125 33141 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"openshift-service-ca.crt" Mar 08 03:35:54.404665 master-0 kubenswrapper[33141]: I0308 03:35:54.404144 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 08 03:35:54.404785 master-0 kubenswrapper[33141]: I0308 03:35:54.404239 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 08 03:35:54.404785 master-0 kubenswrapper[33141]: I0308 03:35:54.404252 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 08 03:35:54.404785 master-0 kubenswrapper[33141]: I0308 03:35:54.404286 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 08 03:35:54.405254 master-0 kubenswrapper[33141]: I0308 03:35:54.405231 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 08 03:35:54.405475 master-0 kubenswrapper[33141]: I0308 03:35:54.405457 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 08 03:35:54.405588 master-0 kubenswrapper[33141]: I0308 03:35:54.405573 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 08 03:35:54.414182 master-0 kubenswrapper[33141]: I0308 03:35:54.414142 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-69cd7f769d-d4snc"] Mar 08 03:35:54.432979 master-0 kubenswrapper[33141]: I0308 03:35:54.432826 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" podStartSLOduration=2.313968515 podStartE2EDuration="6.432770714s" podCreationTimestamp="2026-03-08 03:35:48 +0000 UTC" 
firstStartedPulling="2026-03-08 03:35:49.093059306 +0000 UTC m=+262.962952499" lastFinishedPulling="2026-03-08 03:35:53.211861515 +0000 UTC m=+267.081754698" observedRunningTime="2026-03-08 03:35:54.395575982 +0000 UTC m=+268.265469195" watchObservedRunningTime="2026-03-08 03:35:54.432770714 +0000 UTC m=+268.302663917" Mar 08 03:35:54.437433 master-0 kubenswrapper[33141]: I0308 03:35:54.437378 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 08 03:35:54.441250 master-0 kubenswrapper[33141]: I0308 03:35:54.441205 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 08 03:35:54.451931 master-0 kubenswrapper[33141]: I0308 03:35:54.451866 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.452051 master-0 kubenswrapper[33141]: I0308 03:35:54.451955 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-service-ca\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.452051 master-0 kubenswrapper[33141]: I0308 03:35:54.452014 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.452051 master-0 kubenswrapper[33141]: I0308 03:35:54.452046 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-audit-policies\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.452156 master-0 kubenswrapper[33141]: I0308 03:35:54.452097 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdt5q\" (UniqueName: \"kubernetes.io/projected/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-kube-api-access-fdt5q\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.452156 master-0 kubenswrapper[33141]: I0308 03:35:54.452115 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.452156 master-0 kubenswrapper[33141]: I0308 03:35:54.452137 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-session\") pod 
\"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.452156 master-0 kubenswrapper[33141]: I0308 03:35:54.452155 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-user-template-login\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.452280 master-0 kubenswrapper[33141]: I0308 03:35:54.452206 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.452280 master-0 kubenswrapper[33141]: I0308 03:35:54.452231 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-router-certs\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.452280 master-0 kubenswrapper[33141]: I0308 03:35:54.452250 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-audit-dir\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " 
pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.452451 master-0 kubenswrapper[33141]: I0308 03:35:54.452375 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-user-template-error\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.452451 master-0 kubenswrapper[33141]: I0308 03:35:54.452437 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.554858 master-0 kubenswrapper[33141]: I0308 03:35:54.554760 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdt5q\" (UniqueName: \"kubernetes.io/projected/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-kube-api-access-fdt5q\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.554858 master-0 kubenswrapper[33141]: I0308 03:35:54.554851 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.555226 master-0 kubenswrapper[33141]: I0308 03:35:54.554907 
33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-session\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.555226 master-0 kubenswrapper[33141]: I0308 03:35:54.554962 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-user-template-login\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.555226 master-0 kubenswrapper[33141]: I0308 03:35:54.555085 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.555764 master-0 kubenswrapper[33141]: I0308 03:35:54.555434 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-router-certs\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.555764 master-0 kubenswrapper[33141]: I0308 03:35:54.555489 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-audit-dir\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.555764 master-0 kubenswrapper[33141]: I0308 03:35:54.555567 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-user-template-error\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.555764 master-0 kubenswrapper[33141]: I0308 03:35:54.555596 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.555764 master-0 kubenswrapper[33141]: I0308 03:35:54.555652 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.555764 master-0 kubenswrapper[33141]: I0308 03:35:54.555695 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-service-ca\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: 
\"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.555764 master-0 kubenswrapper[33141]: I0308 03:35:54.555771 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.556262 master-0 kubenswrapper[33141]: I0308 03:35:54.555807 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-audit-policies\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.556502 master-0 kubenswrapper[33141]: I0308 03:35:54.556470 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-audit-policies\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.556666 master-0 kubenswrapper[33141]: I0308 03:35:54.556603 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.556954 master-0 kubenswrapper[33141]: I0308 03:35:54.556853 33141 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.558650 master-0 kubenswrapper[33141]: I0308 03:35:54.558610 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-service-ca\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.560353 master-0 kubenswrapper[33141]: I0308 03:35:54.560312 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-audit-dir\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.560566 master-0 kubenswrapper[33141]: I0308 03:35:54.560409 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-router-certs\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.560701 master-0 kubenswrapper[33141]: I0308 03:35:54.560666 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.561601 master-0 kubenswrapper[33141]: I0308 03:35:54.561523 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.562321 master-0 kubenswrapper[33141]: I0308 03:35:54.562275 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-user-template-login\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.562548 master-0 kubenswrapper[33141]: I0308 03:35:54.562509 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-user-template-error\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.563414 master-0 kubenswrapper[33141]: I0308 03:35:54.563359 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.566476 master-0 
kubenswrapper[33141]: I0308 03:35:54.566408 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-session\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.577047 master-0 kubenswrapper[33141]: I0308 03:35:54.576975 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdt5q\" (UniqueName: \"kubernetes.io/projected/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-kube-api-access-fdt5q\") pod \"oauth-openshift-69cd7f769d-d4snc\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:54.612001 master-0 kubenswrapper[33141]: I0308 03:35:54.611875 33141 patch_prober.go:28] interesting pod/console-748f76c866-99l2l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 08 03:35:54.612361 master-0 kubenswrapper[33141]: I0308 03:35:54.612024 33141 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-748f76c866-99l2l" podUID="04802a97-e959-423f-8ca7-4a8fb5e7e047" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 08 03:35:54.742659 master-0 kubenswrapper[33141]: I0308 03:35:54.742428 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:55.113601 master-0 kubenswrapper[33141]: I0308 03:35:55.113519 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-69cd7f769d-d4snc"] Mar 08 03:35:55.251176 master-0 kubenswrapper[33141]: I0308 03:35:55.251063 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" event={"ID":"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7","Type":"ContainerStarted","Data":"e3ff29c4ef4dd18cb5388cb182aaef1fc7f4b9366f5e90023f59d355481efa43"} Mar 08 03:35:55.815818 master-0 kubenswrapper[33141]: I0308 03:35:55.815741 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 08 03:35:55.816758 master-0 kubenswrapper[33141]: I0308 03:35:55.816713 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 08 03:35:55.821251 master-0 kubenswrapper[33141]: I0308 03:35:55.820987 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 08 03:35:55.821617 master-0 kubenswrapper[33141]: I0308 03:35:55.821404 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-mggqh" Mar 08 03:35:55.837013 master-0 kubenswrapper[33141]: I0308 03:35:55.833626 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 08 03:35:55.882642 master-0 kubenswrapper[33141]: I0308 03:35:55.882558 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89044116-4d25-4312-9475-c92acd031a98-var-lock\") pod \"installer-5-master-0\" (UID: \"89044116-4d25-4312-9475-c92acd031a98\") " 
pod="openshift-kube-scheduler/installer-5-master-0" Mar 08 03:35:55.882873 master-0 kubenswrapper[33141]: I0308 03:35:55.882673 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89044116-4d25-4312-9475-c92acd031a98-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"89044116-4d25-4312-9475-c92acd031a98\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 08 03:35:55.882873 master-0 kubenswrapper[33141]: I0308 03:35:55.882705 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89044116-4d25-4312-9475-c92acd031a98-kube-api-access\") pod \"installer-5-master-0\" (UID: \"89044116-4d25-4312-9475-c92acd031a98\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 08 03:35:55.986531 master-0 kubenswrapper[33141]: I0308 03:35:55.985012 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89044116-4d25-4312-9475-c92acd031a98-var-lock\") pod \"installer-5-master-0\" (UID: \"89044116-4d25-4312-9475-c92acd031a98\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 08 03:35:55.986531 master-0 kubenswrapper[33141]: I0308 03:35:55.985222 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89044116-4d25-4312-9475-c92acd031a98-var-lock\") pod \"installer-5-master-0\" (UID: \"89044116-4d25-4312-9475-c92acd031a98\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 08 03:35:55.986531 master-0 kubenswrapper[33141]: I0308 03:35:55.985281 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89044116-4d25-4312-9475-c92acd031a98-kubelet-dir\") pod \"installer-5-master-0\" (UID: 
\"89044116-4d25-4312-9475-c92acd031a98\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 08 03:35:55.986531 master-0 kubenswrapper[33141]: I0308 03:35:55.985320 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89044116-4d25-4312-9475-c92acd031a98-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"89044116-4d25-4312-9475-c92acd031a98\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 08 03:35:55.986531 master-0 kubenswrapper[33141]: I0308 03:35:55.985323 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89044116-4d25-4312-9475-c92acd031a98-kube-api-access\") pod \"installer-5-master-0\" (UID: \"89044116-4d25-4312-9475-c92acd031a98\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 08 03:35:56.007395 master-0 kubenswrapper[33141]: I0308 03:35:56.007336 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89044116-4d25-4312-9475-c92acd031a98-kube-api-access\") pod \"installer-5-master-0\" (UID: \"89044116-4d25-4312-9475-c92acd031a98\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 08 03:35:56.148974 master-0 kubenswrapper[33141]: I0308 03:35:56.148789 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 08 03:35:56.367393 master-0 kubenswrapper[33141]: I0308 03:35:56.367318 33141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd43b0a4-2149-4ae0-8493-de3dc307b334" path="/var/lib/kubelet/pods/dd43b0a4-2149-4ae0-8493-de3dc307b334/volumes" Mar 08 03:35:56.644053 master-0 kubenswrapper[33141]: I0308 03:35:56.643985 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 08 03:35:57.278146 master-0 kubenswrapper[33141]: I0308 03:35:57.277991 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"89044116-4d25-4312-9475-c92acd031a98","Type":"ContainerStarted","Data":"27dc48b27fc15373c0f1525c8be7959ace65381f5bff90c8a7ee825b430a2ddb"} Mar 08 03:35:58.270944 master-0 kubenswrapper[33141]: I0308 03:35:58.265307 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-69cd7f769d-d4snc"] Mar 08 03:35:58.616433 master-0 kubenswrapper[33141]: I0308 03:35:58.616365 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-cc54f9d45-86rbf" Mar 08 03:35:59.324126 master-0 kubenswrapper[33141]: I0308 03:35:59.324060 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"89044116-4d25-4312-9475-c92acd031a98","Type":"ContainerStarted","Data":"d4f13c089c34b1b5bbecf2b13942276134cb0af95228897598100551ea1b70a6"} Mar 08 03:35:59.344223 master-0 kubenswrapper[33141]: I0308 03:35:59.344166 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd","Type":"ContainerStarted","Data":"aeb5060f36d967a24bda814702ca50ba16608a68f8fd72e296fc70f7ad24dc55"} Mar 08 03:35:59.344223 master-0 kubenswrapper[33141]: I0308 
03:35:59.344223 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd","Type":"ContainerStarted","Data":"b6c0e9e0950c762d93f244064be415804a273d781ae1365309aa409476c29924"} Mar 08 03:35:59.344428 master-0 kubenswrapper[33141]: I0308 03:35:59.344234 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd","Type":"ContainerStarted","Data":"f249f54078101b1c387ddcc37327b2adbf7294331a2911c443a9d14838cfed2b"} Mar 08 03:35:59.360959 master-0 kubenswrapper[33141]: I0308 03:35:59.360793 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-0" podStartSLOduration=4.360775723 podStartE2EDuration="4.360775723s" podCreationTimestamp="2026-03-08 03:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:35:59.360208208 +0000 UTC m=+273.230101401" watchObservedRunningTime="2026-03-08 03:35:59.360775723 +0000 UTC m=+273.230668906" Mar 08 03:35:59.365274 master-0 kubenswrapper[33141]: I0308 03:35:59.365202 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" event={"ID":"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7","Type":"ContainerStarted","Data":"eb08ddfeac71fa4dcfed543afe0bf2207a1606f8fe6af5f9e3a236b0fe7e58f4"} Mar 08 03:35:59.365753 master-0 kubenswrapper[33141]: I0308 03:35:59.365707 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:35:59.409540 master-0 kubenswrapper[33141]: I0308 03:35:59.408642 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" podStartSLOduration=3.8518221219999997 
podStartE2EDuration="7.408622073s" podCreationTimestamp="2026-03-08 03:35:52 +0000 UTC" firstStartedPulling="2026-03-08 03:35:55.121842754 +0000 UTC m=+268.991735987" lastFinishedPulling="2026-03-08 03:35:58.678642745 +0000 UTC m=+272.548535938" observedRunningTime="2026-03-08 03:35:59.406470857 +0000 UTC m=+273.276364060" watchObservedRunningTime="2026-03-08 03:35:59.408622073 +0000 UTC m=+273.278515266" Mar 08 03:35:59.705255 master-0 kubenswrapper[33141]: I0308 03:35:59.705197 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:36:00.376688 master-0 kubenswrapper[33141]: I0308 03:36:00.376607 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd","Type":"ContainerStarted","Data":"2fbd288ee0c643251e4eba117f75447449970e8883038ad77c559f7203d28e83"} Mar 08 03:36:00.377175 master-0 kubenswrapper[33141]: I0308 03:36:00.376696 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd","Type":"ContainerStarted","Data":"01b0692feebaf2187b0550d09f2c2165da749a63bda3595639e7c73a4f66e427"} Mar 08 03:36:00.377175 master-0 kubenswrapper[33141]: I0308 03:36:00.376721 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd","Type":"ContainerStarted","Data":"a4521cfef3bff443eebcd058def837fab754dfe54c8fce6fadf07b51c3ebc210"} Mar 08 03:36:00.413625 master-0 kubenswrapper[33141]: I0308 03:36:00.413508 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=3.965041583 podStartE2EDuration="8.413483247s" podCreationTimestamp="2026-03-08 03:35:52 +0000 UTC" firstStartedPulling="2026-03-08 03:35:54.245043518 +0000 UTC 
m=+268.114936751" lastFinishedPulling="2026-03-08 03:35:58.693485222 +0000 UTC m=+272.563378415" observedRunningTime="2026-03-08 03:36:00.412952203 +0000 UTC m=+274.282845436" watchObservedRunningTime="2026-03-08 03:36:00.413483247 +0000 UTC m=+274.283376470" Mar 08 03:36:02.927011 master-0 kubenswrapper[33141]: I0308 03:36:02.926896 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 08 03:36:03.577748 master-0 kubenswrapper[33141]: I0308 03:36:03.577640 33141 patch_prober.go:28] interesting pod/console-6fbfcd994f-49ft7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" start-of-body= Mar 08 03:36:03.578078 master-0 kubenswrapper[33141]: I0308 03:36:03.577756 33141 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6fbfcd994f-49ft7" podUID="d3a1244d-2bc6-40c7-96c7-8e464a55ff4b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" Mar 08 03:36:04.611570 master-0 kubenswrapper[33141]: I0308 03:36:04.611419 33141 patch_prober.go:28] interesting pod/console-748f76c866-99l2l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 08 03:36:04.612409 master-0 kubenswrapper[33141]: I0308 03:36:04.611555 33141 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-748f76c866-99l2l" podUID="04802a97-e959-423f-8ca7-4a8fb5e7e047" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 08 03:36:11.306029 master-0 kubenswrapper[33141]: I0308 03:36:11.305893 33141 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:36:11.306029 master-0 kubenswrapper[33141]: I0308 03:36:11.306033 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:36:11.510275 master-0 kubenswrapper[33141]: I0308 03:36:11.510180 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Mar 08 03:36:11.514260 master-0 kubenswrapper[33141]: I0308 03:36:11.514180 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Mar 08 03:36:11.521654 master-0 kubenswrapper[33141]: I0308 03:36:11.521552 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-sglg6" Mar 08 03:36:11.527076 master-0 kubenswrapper[33141]: I0308 03:36:11.526999 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 08 03:36:11.553178 master-0 kubenswrapper[33141]: I0308 03:36:11.553097 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Mar 08 03:36:11.577850 master-0 kubenswrapper[33141]: I0308 03:36:11.577690 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f-var-lock\") pod \"installer-6-master-0\" (UID: \"fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 08 03:36:11.577850 master-0 kubenswrapper[33141]: I0308 03:36:11.577804 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f-kubelet-dir\") pod \"installer-6-master-0\" 
(UID: \"fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 08 03:36:11.578104 master-0 kubenswrapper[33141]: I0308 03:36:11.577893 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f-kube-api-access\") pod \"installer-6-master-0\" (UID: \"fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 08 03:36:11.680551 master-0 kubenswrapper[33141]: I0308 03:36:11.680442 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f-var-lock\") pod \"installer-6-master-0\" (UID: \"fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 08 03:36:11.680805 master-0 kubenswrapper[33141]: I0308 03:36:11.680578 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 08 03:36:11.680805 master-0 kubenswrapper[33141]: I0308 03:36:11.680687 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f-kube-api-access\") pod \"installer-6-master-0\" (UID: \"fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 08 03:36:11.681496 master-0 kubenswrapper[33141]: I0308 03:36:11.681445 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f-var-lock\") pod \"installer-6-master-0\" 
(UID: \"fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 08 03:36:11.681559 master-0 kubenswrapper[33141]: I0308 03:36:11.681530 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 08 03:36:11.720076 master-0 kubenswrapper[33141]: I0308 03:36:11.720022 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f-kube-api-access\") pod \"installer-6-master-0\" (UID: \"fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f\") " pod="openshift-kube-apiserver/installer-6-master-0" Mar 08 03:36:11.854798 master-0 kubenswrapper[33141]: I0308 03:36:11.854676 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Mar 08 03:36:12.383425 master-0 kubenswrapper[33141]: W0308 03:36:12.383334 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podfd1a6545_ecae_4ade_a3ba_8d7b0d469f0f.slice/crio-4e0ccceee709a80837fe4933f763d049143908d117c8729750eb1a5ab11d96f4 WatchSource:0}: Error finding container 4e0ccceee709a80837fe4933f763d049143908d117c8729750eb1a5ab11d96f4: Status 404 returned error can't find the container with id 4e0ccceee709a80837fe4933f763d049143908d117c8729750eb1a5ab11d96f4 Mar 08 03:36:12.394203 master-0 kubenswrapper[33141]: I0308 03:36:12.393971 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Mar 08 03:36:12.497568 master-0 kubenswrapper[33141]: I0308 03:36:12.497519 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f","Type":"ContainerStarted","Data":"4e0ccceee709a80837fe4933f763d049143908d117c8729750eb1a5ab11d96f4"} Mar 08 03:36:13.515495 master-0 kubenswrapper[33141]: I0308 03:36:13.515396 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f","Type":"ContainerStarted","Data":"6e5c4d7b7c2f3383367ed91c12e476d8cf762501448166e101db74c453828781"} Mar 08 03:36:13.550133 master-0 kubenswrapper[33141]: I0308 03:36:13.546784 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-6-master-0" podStartSLOduration=2.546749867 podStartE2EDuration="2.546749867s" podCreationTimestamp="2026-03-08 03:36:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:36:13.543596945 +0000 UTC m=+287.413490198" watchObservedRunningTime="2026-03-08 
03:36:13.546749867 +0000 UTC m=+287.416643110" Mar 08 03:36:13.577716 master-0 kubenswrapper[33141]: I0308 03:36:13.577622 33141 patch_prober.go:28] interesting pod/console-6fbfcd994f-49ft7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" start-of-body= Mar 08 03:36:13.577967 master-0 kubenswrapper[33141]: I0308 03:36:13.577732 33141 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6fbfcd994f-49ft7" podUID="d3a1244d-2bc6-40c7-96c7-8e464a55ff4b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" Mar 08 03:36:14.611493 master-0 kubenswrapper[33141]: I0308 03:36:14.611397 33141 patch_prober.go:28] interesting pod/console-748f76c866-99l2l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 08 03:36:14.612465 master-0 kubenswrapper[33141]: I0308 03:36:14.611501 33141 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-748f76c866-99l2l" podUID="04802a97-e959-423f-8ca7-4a8fb5e7e047" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 08 03:36:23.577149 master-0 kubenswrapper[33141]: I0308 03:36:23.577052 33141 patch_prober.go:28] interesting pod/console-6fbfcd994f-49ft7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" start-of-body= Mar 08 03:36:23.578211 master-0 kubenswrapper[33141]: I0308 03:36:23.577153 33141 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-console/console-6fbfcd994f-49ft7" podUID="d3a1244d-2bc6-40c7-96c7-8e464a55ff4b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" Mar 08 03:36:24.398740 master-0 kubenswrapper[33141]: I0308 03:36:24.398567 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" podUID="f2af26ea-e5e5-44ad-a9dc-975ca775e7c7" containerName="oauth-openshift" containerID="cri-o://eb08ddfeac71fa4dcfed543afe0bf2207a1606f8fe6af5f9e3a236b0fe7e58f4" gracePeriod=15 Mar 08 03:36:24.610342 master-0 kubenswrapper[33141]: I0308 03:36:24.610187 33141 patch_prober.go:28] interesting pod/console-748f76c866-99l2l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 08 03:36:24.610342 master-0 kubenswrapper[33141]: I0308 03:36:24.610320 33141 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-748f76c866-99l2l" podUID="04802a97-e959-423f-8ca7-4a8fb5e7e047" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 08 03:36:24.627186 master-0 kubenswrapper[33141]: I0308 03:36:24.627097 33141 generic.go:334] "Generic (PLEG): container finished" podID="f2af26ea-e5e5-44ad-a9dc-975ca775e7c7" containerID="eb08ddfeac71fa4dcfed543afe0bf2207a1606f8fe6af5f9e3a236b0fe7e58f4" exitCode=0 Mar 08 03:36:24.627458 master-0 kubenswrapper[33141]: I0308 03:36:24.627198 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" event={"ID":"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7","Type":"ContainerDied","Data":"eb08ddfeac71fa4dcfed543afe0bf2207a1606f8fe6af5f9e3a236b0fe7e58f4"} Mar 08 03:36:24.952494 
master-0 kubenswrapper[33141]: I0308 03:36:24.952449 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" Mar 08 03:36:24.997966 master-0 kubenswrapper[33141]: I0308 03:36:24.997886 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-695cdc494-nz9mf"] Mar 08 03:36:24.998221 master-0 kubenswrapper[33141]: E0308 03:36:24.998185 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2af26ea-e5e5-44ad-a9dc-975ca775e7c7" containerName="oauth-openshift" Mar 08 03:36:24.998221 master-0 kubenswrapper[33141]: I0308 03:36:24.998198 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2af26ea-e5e5-44ad-a9dc-975ca775e7c7" containerName="oauth-openshift" Mar 08 03:36:24.998355 master-0 kubenswrapper[33141]: I0308 03:36:24.998347 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2af26ea-e5e5-44ad-a9dc-975ca775e7c7" containerName="oauth-openshift" Mar 08 03:36:25.002086 master-0 kubenswrapper[33141]: I0308 03:36:24.998842 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf" Mar 08 03:36:25.026413 master-0 kubenswrapper[33141]: I0308 03:36:25.026343 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-695cdc494-nz9mf"] Mar 08 03:36:25.043071 master-0 kubenswrapper[33141]: I0308 03:36:25.042045 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-cliconfig\") pod \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " Mar 08 03:36:25.043071 master-0 kubenswrapper[33141]: I0308 03:36:25.042118 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-user-template-provider-selection\") pod \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " Mar 08 03:36:25.043071 master-0 kubenswrapper[33141]: I0308 03:36:25.042144 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-service-ca\") pod \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " Mar 08 03:36:25.043071 master-0 kubenswrapper[33141]: I0308 03:36:25.042185 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-trusted-ca-bundle\") pod \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " Mar 08 03:36:25.043071 master-0 kubenswrapper[33141]: I0308 03:36:25.042266 33141 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-router-certs\") pod \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " Mar 08 03:36:25.043071 master-0 kubenswrapper[33141]: I0308 03:36:25.042307 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-ocp-branding-template\") pod \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " Mar 08 03:36:25.043071 master-0 kubenswrapper[33141]: I0308 03:36:25.042336 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-user-template-error\") pod \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " Mar 08 03:36:25.043071 master-0 kubenswrapper[33141]: I0308 03:36:25.042364 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-session\") pod \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " Mar 08 03:36:25.043071 master-0 kubenswrapper[33141]: I0308 03:36:25.042422 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-serving-cert\") pod \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " Mar 08 03:36:25.043071 master-0 kubenswrapper[33141]: I0308 03:36:25.042445 
33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdt5q\" (UniqueName: \"kubernetes.io/projected/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-kube-api-access-fdt5q\") pod \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " Mar 08 03:36:25.043071 master-0 kubenswrapper[33141]: I0308 03:36:25.042471 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-user-template-login\") pod \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " Mar 08 03:36:25.043071 master-0 kubenswrapper[33141]: I0308 03:36:25.042513 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-audit-dir\") pod \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " Mar 08 03:36:25.043071 master-0 kubenswrapper[33141]: I0308 03:36:25.042546 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-audit-policies\") pod \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\" (UID: \"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7\") " Mar 08 03:36:25.043837 master-0 kubenswrapper[33141]: I0308 03:36:25.043798 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7" (UID: "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:36:25.045046 master-0 kubenswrapper[33141]: I0308 03:36:25.044662 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7" (UID: "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:36:25.045388 master-0 kubenswrapper[33141]: I0308 03:36:25.045325 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7" (UID: "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:36:25.045450 master-0 kubenswrapper[33141]: I0308 03:36:25.045368 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7" (UID: "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:36:25.045450 master-0 kubenswrapper[33141]: I0308 03:36:25.045398 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7" (UID: "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:36:25.047187 master-0 kubenswrapper[33141]: I0308 03:36:25.047150 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7" (UID: "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:36:25.047593 master-0 kubenswrapper[33141]: I0308 03:36:25.047553 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7" (UID: "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:36:25.047845 master-0 kubenswrapper[33141]: I0308 03:36:25.047808 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7" (UID: "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:36:25.048631 master-0 kubenswrapper[33141]: I0308 03:36:25.048589 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7" (UID: "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:36:25.049124 master-0 kubenswrapper[33141]: I0308 03:36:25.049063 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7" (UID: "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:36:25.050338 master-0 kubenswrapper[33141]: I0308 03:36:25.050262 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7" (UID: "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:36:25.053140 master-0 kubenswrapper[33141]: I0308 03:36:25.050835 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7" (UID: "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:36:25.053140 master-0 kubenswrapper[33141]: I0308 03:36:25.051292 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-kube-api-access-fdt5q" (OuterVolumeSpecName: "kube-api-access-fdt5q") pod "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7" (UID: "f2af26ea-e5e5-44ad-a9dc-975ca775e7c7"). InnerVolumeSpecName "kube-api-access-fdt5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:36:25.144865 master-0 kubenswrapper[33141]: I0308 03:36:25.144735 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5dccd938-f89c-48f9-aa32-761b3dead193-audit-dir\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf" Mar 08 03:36:25.144865 master-0 kubenswrapper[33141]: I0308 03:36:25.144834 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5dccd938-f89c-48f9-aa32-761b3dead193-audit-policies\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf" Mar 08 03:36:25.145087 master-0 kubenswrapper[33141]: I0308 03:36:25.144890 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-user-template-login\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf" Mar 08 03:36:25.145087 master-0 kubenswrapper[33141]: I0308 03:36:25.144990 33141 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-system-session\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf" Mar 08 03:36:25.145087 master-0 kubenswrapper[33141]: I0308 03:36:25.145044 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-system-router-certs\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf" Mar 08 03:36:25.145253 master-0 kubenswrapper[33141]: I0308 03:36:25.145086 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf" Mar 08 03:36:25.145253 master-0 kubenswrapper[33141]: I0308 03:36:25.145126 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-system-serving-cert\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf" Mar 08 03:36:25.145253 master-0 kubenswrapper[33141]: I0308 03:36:25.145171 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-system-cliconfig\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf" Mar 08 03:36:25.145253 master-0 kubenswrapper[33141]: I0308 03:36:25.145201 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-system-service-ca\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf" Mar 08 03:36:25.145391 master-0 kubenswrapper[33141]: I0308 03:36:25.145285 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf" Mar 08 03:36:25.145391 master-0 kubenswrapper[33141]: I0308 03:36:25.145328 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-user-template-error\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf" Mar 08 03:36:25.145454 master-0 kubenswrapper[33141]: I0308 03:36:25.145386 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf" Mar 08 03:36:25.145454 master-0 kubenswrapper[33141]: I0308 03:36:25.145420 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mncnj\" (UniqueName: \"kubernetes.io/projected/5dccd938-f89c-48f9-aa32-761b3dead193-kube-api-access-mncnj\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf" Mar 08 03:36:25.145586 master-0 kubenswrapper[33141]: I0308 03:36:25.145550 33141 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Mar 08 03:36:25.145625 master-0 kubenswrapper[33141]: I0308 03:36:25.145586 33141 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Mar 08 03:36:25.145625 master-0 kubenswrapper[33141]: I0308 03:36:25.145607 33141 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Mar 08 03:36:25.145688 master-0 kubenswrapper[33141]: I0308 03:36:25.145629 33141 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Mar 
Mar 08 03:36:25.145688 master-0 kubenswrapper[33141]: I0308 03:36:25.145650 33141 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 08 03:36:25.145688 master-0 kubenswrapper[33141]: I0308 03:36:25.145671 33141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdt5q\" (UniqueName: \"kubernetes.io/projected/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-kube-api-access-fdt5q\") on node \"master-0\" DevicePath \"\""
Mar 08 03:36:25.145783 master-0 kubenswrapper[33141]: I0308 03:36:25.145691 33141 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\""
Mar 08 03:36:25.145783 master-0 kubenswrapper[33141]: I0308 03:36:25.145710 33141 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-audit-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:36:25.145783 master-0 kubenswrapper[33141]: I0308 03:36:25.145728 33141 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-audit-policies\") on node \"master-0\" DevicePath \"\""
Mar 08 03:36:25.145783 master-0 kubenswrapper[33141]: I0308 03:36:25.145747 33141 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\""
Mar 08 03:36:25.145783 master-0 kubenswrapper[33141]: I0308 03:36:25.145766 33141 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\""
Mar 08 03:36:25.145939 master-0 kubenswrapper[33141]: I0308 03:36:25.145786 33141 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 08 03:36:25.145939 master-0 kubenswrapper[33141]: I0308 03:36:25.145807 33141 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 08 03:36:25.247052 master-0 kubenswrapper[33141]: I0308 03:36:25.246969 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5dccd938-f89c-48f9-aa32-761b3dead193-audit-dir\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.247242 master-0 kubenswrapper[33141]: I0308 03:36:25.247085 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5dccd938-f89c-48f9-aa32-761b3dead193-audit-policies\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.247242 master-0 kubenswrapper[33141]: I0308 03:36:25.247135 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-user-template-login\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.247242 master-0 kubenswrapper[33141]: I0308 03:36:25.247188 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-system-session\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.247242 master-0 kubenswrapper[33141]: I0308 03:36:25.247231 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-system-router-certs\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.247366 master-0 kubenswrapper[33141]: I0308 03:36:25.247274 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.247366 master-0 kubenswrapper[33141]: I0308 03:36:25.247320 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-system-serving-cert\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.247428 master-0 kubenswrapper[33141]: I0308 03:36:25.247365 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-system-cliconfig\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.247428 master-0 kubenswrapper[33141]: I0308 03:36:25.247396 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-system-service-ca\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.247483 master-0 kubenswrapper[33141]: I0308 03:36:25.247455 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.247523 master-0 kubenswrapper[33141]: I0308 03:36:25.247497 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-user-template-error\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.247583 master-0 kubenswrapper[33141]: I0308 03:36:25.247555 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.247626 master-0 kubenswrapper[33141]: I0308 03:36:25.247597 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mncnj\" (UniqueName: \"kubernetes.io/projected/5dccd938-f89c-48f9-aa32-761b3dead193-kube-api-access-mncnj\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.248841 master-0 kubenswrapper[33141]: I0308 03:36:25.248803 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5dccd938-f89c-48f9-aa32-761b3dead193-audit-dir\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.249448 master-0 kubenswrapper[33141]: I0308 03:36:25.249423 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-system-cliconfig\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.250248 master-0 kubenswrapper[33141]: I0308 03:36:25.250228 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5dccd938-f89c-48f9-aa32-761b3dead193-audit-policies\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.250511 master-0 kubenswrapper[33141]: I0308 03:36:25.250428 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-system-service-ca\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.250641 master-0 kubenswrapper[33141]: I0308 03:36:25.250593 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.252634 master-0 kubenswrapper[33141]: I0308 03:36:25.252590 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-system-session\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.254247 master-0 kubenswrapper[33141]: I0308 03:36:25.254212 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-system-router-certs\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.255332 master-0 kubenswrapper[33141]: I0308 03:36:25.255108 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-user-template-error\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.255508 master-0 kubenswrapper[33141]: I0308 03:36:25.255466 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.255772 master-0 kubenswrapper[33141]: I0308 03:36:25.255712 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.256463 master-0 kubenswrapper[33141]: I0308 03:36:25.256420 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-system-serving-cert\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.257021 master-0 kubenswrapper[33141]: I0308 03:36:25.256978 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5dccd938-f89c-48f9-aa32-761b3dead193-v4-0-config-user-template-login\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.273014 master-0 kubenswrapper[33141]: I0308 03:36:25.272945 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mncnj\" (UniqueName: \"kubernetes.io/projected/5dccd938-f89c-48f9-aa32-761b3dead193-kube-api-access-mncnj\") pod \"oauth-openshift-695cdc494-nz9mf\" (UID: \"5dccd938-f89c-48f9-aa32-761b3dead193\") " pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.331735 master-0 kubenswrapper[33141]: I0308 03:36:25.331621 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:25.646886 master-0 kubenswrapper[33141]: I0308 03:36:25.640948 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" event={"ID":"f2af26ea-e5e5-44ad-a9dc-975ca775e7c7","Type":"ContainerDied","Data":"e3ff29c4ef4dd18cb5388cb182aaef1fc7f4b9366f5e90023f59d355481efa43"}
Mar 08 03:36:25.646886 master-0 kubenswrapper[33141]: I0308 03:36:25.641021 33141 scope.go:117] "RemoveContainer" containerID="eb08ddfeac71fa4dcfed543afe0bf2207a1606f8fe6af5f9e3a236b0fe7e58f4"
Mar 08 03:36:25.646886 master-0 kubenswrapper[33141]: I0308 03:36:25.641026 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc"
Mar 08 03:36:25.693583 master-0 kubenswrapper[33141]: I0308 03:36:25.693509 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-69cd7f769d-d4snc"]
Mar 08 03:36:25.700052 master-0 kubenswrapper[33141]: I0308 03:36:25.699861 33141 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-69cd7f769d-d4snc"]
Mar 08 03:36:25.743351 master-0 kubenswrapper[33141]: I0308 03:36:25.743267 33141 patch_prober.go:28] interesting pod/oauth-openshift-69cd7f769d-d4snc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.128.0.104:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 03:36:25.743351 master-0 kubenswrapper[33141]: I0308 03:36:25.743340 33141 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-69cd7f769d-d4snc" podUID="f2af26ea-e5e5-44ad-a9dc-975ca775e7c7" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.128.0.104:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 03:36:25.859237 master-0 kubenswrapper[33141]: I0308 03:36:25.859152 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-695cdc494-nz9mf"]
Mar 08 03:36:26.340389 master-0 kubenswrapper[33141]: I0308 03:36:26.340206 33141 kubelet.go:1505] "Image garbage collection succeeded"
Mar 08 03:36:26.366534 master-0 kubenswrapper[33141]: I0308 03:36:26.366486 33141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2af26ea-e5e5-44ad-a9dc-975ca775e7c7" path="/var/lib/kubelet/pods/f2af26ea-e5e5-44ad-a9dc-975ca775e7c7/volumes"
Mar 08 03:36:26.663238 master-0 kubenswrapper[33141]: I0308 03:36:26.661402 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf" event={"ID":"5dccd938-f89c-48f9-aa32-761b3dead193","Type":"ContainerStarted","Data":"db66894e4bdaf296d20f8b59b02cf64bc00f8458df8c8bb2f1aba8b194fb1a43"}
Mar 08 03:36:26.663238 master-0 kubenswrapper[33141]: I0308 03:36:26.661511 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf" event={"ID":"5dccd938-f89c-48f9-aa32-761b3dead193","Type":"ContainerStarted","Data":"4acfdcdec7e2fdc7bb5936a7e94f3c05ac4013c716fe7796e6aec0b30e68500a"}
Mar 08 03:36:26.663238 master-0 kubenswrapper[33141]: I0308 03:36:26.662458 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:26.677556 master-0 kubenswrapper[33141]: I0308 03:36:26.677448 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf"
Mar 08 03:36:26.753404 master-0 kubenswrapper[33141]: I0308 03:36:26.753318 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-695cdc494-nz9mf" podStartSLOduration=28.753297937 podStartE2EDuration="28.753297937s" podCreationTimestamp="2026-03-08 03:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:36:26.713570339 +0000 UTC m=+300.583463592" watchObservedRunningTime="2026-03-08 03:36:26.753297937 +0000 UTC m=+300.623191130"
Mar 08 03:36:29.983168 master-0 kubenswrapper[33141]: I0308 03:36:29.983090 33141 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 08 03:36:29.984294 master-0 kubenswrapper[33141]: I0308 03:36:29.983463 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-recovery-controller" containerID="cri-o://f4aba59a0dac77531c80c6a14f39f9750ecdb826fbbcf0547c734560189b6319" gracePeriod=30
Mar 08 03:36:29.984294 master-0 kubenswrapper[33141]: I0308 03:36:29.983558 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler" containerID="cri-o://368b8ba6f29a3d4eee5529b66bef9da59c3b79e9dda846465669b30ae0b1c3ef" gracePeriod=30
Mar 08 03:36:29.984294 master-0 kubenswrapper[33141]: I0308 03:36:29.983562 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-cert-syncer" containerID="cri-o://ee449e7433c4a405cc6fedd37c99af30ee1c65aa3f79f3a17b1bfbdcb68f1429" gracePeriod=30
Mar 08 03:36:29.988305 master-0 kubenswrapper[33141]: I0308 03:36:29.988218 33141 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 08 03:36:29.988804 master-0 kubenswrapper[33141]: E0308 03:36:29.988753 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-cert-syncer"
Mar 08 03:36:29.988804 master-0 kubenswrapper[33141]: I0308 03:36:29.988782 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-cert-syncer"
Mar 08 03:36:29.989047 master-0 kubenswrapper[33141]: E0308 03:36:29.988870 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="wait-for-host-port"
Mar 08 03:36:29.989520 master-0 kubenswrapper[33141]: I0308 03:36:29.988883 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="wait-for-host-port"
Mar 08 03:36:29.989612 master-0 kubenswrapper[33141]: E0308 03:36:29.989517 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler"
Mar 08 03:36:29.989612 master-0 kubenswrapper[33141]: I0308 03:36:29.989560 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler"
Mar 08 03:36:29.989612 master-0 kubenswrapper[33141]: E0308 03:36:29.989577 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler"
Mar 08 03:36:29.989612 master-0 kubenswrapper[33141]: I0308 03:36:29.989588 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler"
Mar 08 03:36:29.990064 master-0 kubenswrapper[33141]: E0308 03:36:29.989642 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-recovery-controller"
Mar 08 03:36:29.990064 master-0 kubenswrapper[33141]: I0308 03:36:29.989655 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-recovery-controller"
Mar 08 03:36:29.990538 master-0 kubenswrapper[33141]: I0308 03:36:29.990433 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler"
Mar 08 03:36:29.990654 master-0 kubenswrapper[33141]: I0308 03:36:29.990626 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-recovery-controller"
Mar 08 03:36:29.990733 master-0 kubenswrapper[33141]: I0308 03:36:29.990658 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-cert-syncer"
Mar 08 03:36:29.990871 master-0 kubenswrapper[33141]: I0308 03:36:29.990821 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-cert-syncer"
Mar 08 03:36:29.991248 master-0 kubenswrapper[33141]: E0308 03:36:29.991201 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-cert-syncer"
Mar 08 03:36:29.991248 master-0 kubenswrapper[33141]: I0308 03:36:29.991251 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-cert-syncer"
Mar 08 03:36:29.991594 master-0 kubenswrapper[33141]: I0308 03:36:29.991546 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler"
Mar 08 03:36:30.159718 master-0 kubenswrapper[33141]: I0308 03:36:30.159624 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler-cert-syncer/1.log"
Mar 08 03:36:30.162111 master-0 kubenswrapper[33141]: I0308 03:36:30.162052 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler-cert-syncer/0.log"
Mar 08 03:36:30.163172 master-0 kubenswrapper[33141]: I0308 03:36:30.163104 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler/0.log"
Mar 08 03:36:30.164116 master-0 kubenswrapper[33141]: I0308 03:36:30.164065 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 08 03:36:30.167833 master-0 kubenswrapper[33141]: I0308 03:36:30.167775 33141 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="1d3d45b6ce1b3764f9927e623a71adf8" podUID="1453f6461bf5d599ad65a4656343ee91"
Mar 08 03:36:30.196640 master-0 kubenswrapper[33141]: I0308 03:36:30.196581 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 08 03:36:30.196808 master-0 kubenswrapper[33141]: I0308 03:36:30.196712 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 08 03:36:30.298390 master-0 kubenswrapper[33141]: I0308 03:36:30.298324 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"1d3d45b6ce1b3764f9927e623a71adf8\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") "
Mar 08 03:36:30.298600 master-0 kubenswrapper[33141]: I0308 03:36:30.298402 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"1d3d45b6ce1b3764f9927e623a71adf8\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") "
Mar 08 03:36:30.298600 master-0 kubenswrapper[33141]: I0308 03:36:30.298538 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "1d3d45b6ce1b3764f9927e623a71adf8" (UID: "1d3d45b6ce1b3764f9927e623a71adf8"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:36:30.298720 master-0 kubenswrapper[33141]: I0308 03:36:30.298659 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "1d3d45b6ce1b3764f9927e623a71adf8" (UID: "1d3d45b6ce1b3764f9927e623a71adf8"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:36:30.299011 master-0 kubenswrapper[33141]: I0308 03:36:30.298953 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 08 03:36:30.299250 master-0 kubenswrapper[33141]: I0308 03:36:30.299215 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 08 03:36:30.299301 master-0 kubenswrapper[33141]: I0308 03:36:30.299234 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 08 03:36:30.299347 master-0 kubenswrapper[33141]: I0308 03:36:30.299314 33141 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:36:30.299347 master-0 kubenswrapper[33141]: I0308 03:36:30.299339 33141 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:36:30.299437 master-0 kubenswrapper[33141]: I0308 03:36:30.299393 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 08 03:36:30.373547 master-0 kubenswrapper[33141]: I0308 03:36:30.373454 33141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d3d45b6ce1b3764f9927e623a71adf8" path="/var/lib/kubelet/pods/1d3d45b6ce1b3764f9927e623a71adf8/volumes"
Mar 08 03:36:30.709609 master-0 kubenswrapper[33141]: I0308 03:36:30.709465 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler-cert-syncer/1.log"
Mar 08 03:36:30.711978 master-0 kubenswrapper[33141]: I0308 03:36:30.711875 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler-cert-syncer/0.log"
Mar 08 03:36:30.712798 master-0 kubenswrapper[33141]: I0308 03:36:30.712752 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler/0.log"
Mar 08 03:36:30.713568 master-0 kubenswrapper[33141]: I0308 03:36:30.713483 33141 generic.go:334] "Generic (PLEG): container finished" podID="1d3d45b6ce1b3764f9927e623a71adf8" containerID="ee449e7433c4a405cc6fedd37c99af30ee1c65aa3f79f3a17b1bfbdcb68f1429" exitCode=2
Mar 08 03:36:30.713568 master-0 kubenswrapper[33141]: I0308 03:36:30.713555 33141 generic.go:334] "Generic (PLEG): container finished" podID="1d3d45b6ce1b3764f9927e623a71adf8" containerID="368b8ba6f29a3d4eee5529b66bef9da59c3b79e9dda846465669b30ae0b1c3ef" exitCode=0
Mar 08 03:36:30.713757 master-0 kubenswrapper[33141]: I0308 03:36:30.713574 33141 generic.go:334] "Generic (PLEG): container finished" podID="1d3d45b6ce1b3764f9927e623a71adf8" containerID="f4aba59a0dac77531c80c6a14f39f9750ecdb826fbbcf0547c734560189b6319" exitCode=0
Mar 08 03:36:30.713757 master-0 kubenswrapper[33141]: I0308 03:36:30.713599 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 08 03:36:30.713757 master-0 kubenswrapper[33141]: I0308 03:36:30.713634 33141 scope.go:117] "RemoveContainer" containerID="ee449e7433c4a405cc6fedd37c99af30ee1c65aa3f79f3a17b1bfbdcb68f1429"
Mar 08 03:36:30.717858 master-0 kubenswrapper[33141]: I0308 03:36:30.717457 33141 generic.go:334] "Generic (PLEG): container finished" podID="89044116-4d25-4312-9475-c92acd031a98" containerID="d4f13c089c34b1b5bbecf2b13942276134cb0af95228897598100551ea1b70a6" exitCode=0
Mar 08 03:36:30.717858 master-0 kubenswrapper[33141]: I0308 03:36:30.717512 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"89044116-4d25-4312-9475-c92acd031a98","Type":"ContainerDied","Data":"d4f13c089c34b1b5bbecf2b13942276134cb0af95228897598100551ea1b70a6"}
Mar 08 03:36:30.719445 master-0 kubenswrapper[33141]: I0308 03:36:30.719366 33141 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="1d3d45b6ce1b3764f9927e623a71adf8" podUID="1453f6461bf5d599ad65a4656343ee91"
Mar 08 03:36:30.743586 master-0 kubenswrapper[33141]: I0308 03:36:30.743213 33141 scope.go:117] "RemoveContainer" containerID="368b8ba6f29a3d4eee5529b66bef9da59c3b79e9dda846465669b30ae0b1c3ef"
Mar 08 03:36:30.753332 master-0 kubenswrapper[33141]: I0308 03:36:30.753251 33141 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="1d3d45b6ce1b3764f9927e623a71adf8" podUID="1453f6461bf5d599ad65a4656343ee91"
Mar 08 03:36:30.778118 master-0 kubenswrapper[33141]: I0308 03:36:30.778047 33141 scope.go:117] "RemoveContainer" containerID="f4aba59a0dac77531c80c6a14f39f9750ecdb826fbbcf0547c734560189b6319"
Mar 08 03:36:30.810023 master-0 kubenswrapper[33141]: I0308 03:36:30.809827 33141 scope.go:117] "RemoveContainer" containerID="93d525f5634313a3b094a60485f81885ea0fad7e7ead5c0208227c604d3c848e"
Mar 08 03:36:30.838791 master-0 kubenswrapper[33141]: I0308 03:36:30.838739 33141 scope.go:117] "RemoveContainer" containerID="1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530"
Mar 08 03:36:30.870572 master-0 kubenswrapper[33141]: I0308 03:36:30.870520 33141 scope.go:117] "RemoveContainer" containerID="a5b213491f434eaf96969b81f553d91a137807d1aa05fbe10cf34450dd9f1422"
Mar 08 03:36:30.897352 master-0 kubenswrapper[33141]: I0308 03:36:30.897288 33141 scope.go:117] "RemoveContainer" containerID="ee449e7433c4a405cc6fedd37c99af30ee1c65aa3f79f3a17b1bfbdcb68f1429"
Mar 08 03:36:30.897945 master-0 kubenswrapper[33141]: E0308 03:36:30.897859 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee449e7433c4a405cc6fedd37c99af30ee1c65aa3f79f3a17b1bfbdcb68f1429\": container with ID starting with ee449e7433c4a405cc6fedd37c99af30ee1c65aa3f79f3a17b1bfbdcb68f1429 not found: ID does not exist" containerID="ee449e7433c4a405cc6fedd37c99af30ee1c65aa3f79f3a17b1bfbdcb68f1429"
Mar 08 03:36:30.898038 master-0 kubenswrapper[33141]: I0308 03:36:30.897960 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee449e7433c4a405cc6fedd37c99af30ee1c65aa3f79f3a17b1bfbdcb68f1429"} err="failed to get container status \"ee449e7433c4a405cc6fedd37c99af30ee1c65aa3f79f3a17b1bfbdcb68f1429\": rpc error: code = NotFound desc = could not find container \"ee449e7433c4a405cc6fedd37c99af30ee1c65aa3f79f3a17b1bfbdcb68f1429\": container with ID starting with ee449e7433c4a405cc6fedd37c99af30ee1c65aa3f79f3a17b1bfbdcb68f1429 not found: ID does not exist"
Mar 08 03:36:30.898038 master-0 kubenswrapper[33141]: I0308 03:36:30.898002 33141 scope.go:117] "RemoveContainer" containerID="368b8ba6f29a3d4eee5529b66bef9da59c3b79e9dda846465669b30ae0b1c3ef"
Mar 08 03:36:30.898703 master-0 kubenswrapper[33141]: E0308 03:36:30.898641 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"368b8ba6f29a3d4eee5529b66bef9da59c3b79e9dda846465669b30ae0b1c3ef\": container with ID starting with 368b8ba6f29a3d4eee5529b66bef9da59c3b79e9dda846465669b30ae0b1c3ef not found: ID does not exist" containerID="368b8ba6f29a3d4eee5529b66bef9da59c3b79e9dda846465669b30ae0b1c3ef"
Mar 08 03:36:30.898787 master-0 kubenswrapper[33141]: I0308 03:36:30.898725 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"368b8ba6f29a3d4eee5529b66bef9da59c3b79e9dda846465669b30ae0b1c3ef"} err="failed to get container status \"368b8ba6f29a3d4eee5529b66bef9da59c3b79e9dda846465669b30ae0b1c3ef\": rpc error: code = NotFound desc = could not find container \"368b8ba6f29a3d4eee5529b66bef9da59c3b79e9dda846465669b30ae0b1c3ef\": container with ID starting with 368b8ba6f29a3d4eee5529b66bef9da59c3b79e9dda846465669b30ae0b1c3ef not found: ID does not exist"
Mar 08 03:36:30.898787 master-0 kubenswrapper[33141]: I0308 03:36:30.898772 33141 scope.go:117] "RemoveContainer" containerID="f4aba59a0dac77531c80c6a14f39f9750ecdb826fbbcf0547c734560189b6319"
Mar 08 03:36:30.899411 master-0 kubenswrapper[33141]: E0308 03:36:30.899374 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4aba59a0dac77531c80c6a14f39f9750ecdb826fbbcf0547c734560189b6319\": container with ID starting with f4aba59a0dac77531c80c6a14f39f9750ecdb826fbbcf0547c734560189b6319 not found: ID does not exist" containerID="f4aba59a0dac77531c80c6a14f39f9750ecdb826fbbcf0547c734560189b6319"
Mar 08 03:36:30.899495 master-0 kubenswrapper[33141]: I0308 03:36:30.899418 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4aba59a0dac77531c80c6a14f39f9750ecdb826fbbcf0547c734560189b6319"} err="failed to get 
container status \"f4aba59a0dac77531c80c6a14f39f9750ecdb826fbbcf0547c734560189b6319\": rpc error: code = NotFound desc = could not find container \"f4aba59a0dac77531c80c6a14f39f9750ecdb826fbbcf0547c734560189b6319\": container with ID starting with f4aba59a0dac77531c80c6a14f39f9750ecdb826fbbcf0547c734560189b6319 not found: ID does not exist" Mar 08 03:36:30.899495 master-0 kubenswrapper[33141]: I0308 03:36:30.899450 33141 scope.go:117] "RemoveContainer" containerID="93d525f5634313a3b094a60485f81885ea0fad7e7ead5c0208227c604d3c848e" Mar 08 03:36:30.899955 master-0 kubenswrapper[33141]: E0308 03:36:30.899868 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93d525f5634313a3b094a60485f81885ea0fad7e7ead5c0208227c604d3c848e\": container with ID starting with 93d525f5634313a3b094a60485f81885ea0fad7e7ead5c0208227c604d3c848e not found: ID does not exist" containerID="93d525f5634313a3b094a60485f81885ea0fad7e7ead5c0208227c604d3c848e" Mar 08 03:36:30.900033 master-0 kubenswrapper[33141]: I0308 03:36:30.899973 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93d525f5634313a3b094a60485f81885ea0fad7e7ead5c0208227c604d3c848e"} err="failed to get container status \"93d525f5634313a3b094a60485f81885ea0fad7e7ead5c0208227c604d3c848e\": rpc error: code = NotFound desc = could not find container \"93d525f5634313a3b094a60485f81885ea0fad7e7ead5c0208227c604d3c848e\": container with ID starting with 93d525f5634313a3b094a60485f81885ea0fad7e7ead5c0208227c604d3c848e not found: ID does not exist" Mar 08 03:36:30.900033 master-0 kubenswrapper[33141]: I0308 03:36:30.900013 33141 scope.go:117] "RemoveContainer" containerID="1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530" Mar 08 03:36:30.900483 master-0 kubenswrapper[33141]: E0308 03:36:30.900437 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530\": container with ID starting with 1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530 not found: ID does not exist" containerID="1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530" Mar 08 03:36:30.900542 master-0 kubenswrapper[33141]: I0308 03:36:30.900497 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530"} err="failed to get container status \"1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530\": rpc error: code = NotFound desc = could not find container \"1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530\": container with ID starting with 1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530 not found: ID does not exist" Mar 08 03:36:30.900587 master-0 kubenswrapper[33141]: I0308 03:36:30.900537 33141 scope.go:117] "RemoveContainer" containerID="a5b213491f434eaf96969b81f553d91a137807d1aa05fbe10cf34450dd9f1422" Mar 08 03:36:30.901288 master-0 kubenswrapper[33141]: E0308 03:36:30.901234 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5b213491f434eaf96969b81f553d91a137807d1aa05fbe10cf34450dd9f1422\": container with ID starting with a5b213491f434eaf96969b81f553d91a137807d1aa05fbe10cf34450dd9f1422 not found: ID does not exist" containerID="a5b213491f434eaf96969b81f553d91a137807d1aa05fbe10cf34450dd9f1422" Mar 08 03:36:30.901373 master-0 kubenswrapper[33141]: I0308 03:36:30.901293 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5b213491f434eaf96969b81f553d91a137807d1aa05fbe10cf34450dd9f1422"} err="failed to get container status \"a5b213491f434eaf96969b81f553d91a137807d1aa05fbe10cf34450dd9f1422\": rpc error: code = NotFound desc = could not find container 
\"a5b213491f434eaf96969b81f553d91a137807d1aa05fbe10cf34450dd9f1422\": container with ID starting with a5b213491f434eaf96969b81f553d91a137807d1aa05fbe10cf34450dd9f1422 not found: ID does not exist" Mar 08 03:36:30.901373 master-0 kubenswrapper[33141]: I0308 03:36:30.901329 33141 scope.go:117] "RemoveContainer" containerID="ee449e7433c4a405cc6fedd37c99af30ee1c65aa3f79f3a17b1bfbdcb68f1429" Mar 08 03:36:30.901862 master-0 kubenswrapper[33141]: I0308 03:36:30.901735 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee449e7433c4a405cc6fedd37c99af30ee1c65aa3f79f3a17b1bfbdcb68f1429"} err="failed to get container status \"ee449e7433c4a405cc6fedd37c99af30ee1c65aa3f79f3a17b1bfbdcb68f1429\": rpc error: code = NotFound desc = could not find container \"ee449e7433c4a405cc6fedd37c99af30ee1c65aa3f79f3a17b1bfbdcb68f1429\": container with ID starting with ee449e7433c4a405cc6fedd37c99af30ee1c65aa3f79f3a17b1bfbdcb68f1429 not found: ID does not exist" Mar 08 03:36:30.901862 master-0 kubenswrapper[33141]: I0308 03:36:30.901790 33141 scope.go:117] "RemoveContainer" containerID="368b8ba6f29a3d4eee5529b66bef9da59c3b79e9dda846465669b30ae0b1c3ef" Mar 08 03:36:30.902249 master-0 kubenswrapper[33141]: I0308 03:36:30.902190 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"368b8ba6f29a3d4eee5529b66bef9da59c3b79e9dda846465669b30ae0b1c3ef"} err="failed to get container status \"368b8ba6f29a3d4eee5529b66bef9da59c3b79e9dda846465669b30ae0b1c3ef\": rpc error: code = NotFound desc = could not find container \"368b8ba6f29a3d4eee5529b66bef9da59c3b79e9dda846465669b30ae0b1c3ef\": container with ID starting with 368b8ba6f29a3d4eee5529b66bef9da59c3b79e9dda846465669b30ae0b1c3ef not found: ID does not exist" Mar 08 03:36:30.902316 master-0 kubenswrapper[33141]: I0308 03:36:30.902245 33141 scope.go:117] "RemoveContainer" containerID="f4aba59a0dac77531c80c6a14f39f9750ecdb826fbbcf0547c734560189b6319" Mar 08 
03:36:30.902870 master-0 kubenswrapper[33141]: I0308 03:36:30.902773 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4aba59a0dac77531c80c6a14f39f9750ecdb826fbbcf0547c734560189b6319"} err="failed to get container status \"f4aba59a0dac77531c80c6a14f39f9750ecdb826fbbcf0547c734560189b6319\": rpc error: code = NotFound desc = could not find container \"f4aba59a0dac77531c80c6a14f39f9750ecdb826fbbcf0547c734560189b6319\": container with ID starting with f4aba59a0dac77531c80c6a14f39f9750ecdb826fbbcf0547c734560189b6319 not found: ID does not exist" Mar 08 03:36:30.902870 master-0 kubenswrapper[33141]: I0308 03:36:30.902799 33141 scope.go:117] "RemoveContainer" containerID="93d525f5634313a3b094a60485f81885ea0fad7e7ead5c0208227c604d3c848e" Mar 08 03:36:30.903881 master-0 kubenswrapper[33141]: I0308 03:36:30.903328 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93d525f5634313a3b094a60485f81885ea0fad7e7ead5c0208227c604d3c848e"} err="failed to get container status \"93d525f5634313a3b094a60485f81885ea0fad7e7ead5c0208227c604d3c848e\": rpc error: code = NotFound desc = could not find container \"93d525f5634313a3b094a60485f81885ea0fad7e7ead5c0208227c604d3c848e\": container with ID starting with 93d525f5634313a3b094a60485f81885ea0fad7e7ead5c0208227c604d3c848e not found: ID does not exist" Mar 08 03:36:30.903881 master-0 kubenswrapper[33141]: I0308 03:36:30.903376 33141 scope.go:117] "RemoveContainer" containerID="1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530" Mar 08 03:36:30.903881 master-0 kubenswrapper[33141]: I0308 03:36:30.903706 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530"} err="failed to get container status \"1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530\": rpc error: code = NotFound desc = could not find 
container \"1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530\": container with ID starting with 1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530 not found: ID does not exist" Mar 08 03:36:30.903881 master-0 kubenswrapper[33141]: I0308 03:36:30.903742 33141 scope.go:117] "RemoveContainer" containerID="a5b213491f434eaf96969b81f553d91a137807d1aa05fbe10cf34450dd9f1422" Mar 08 03:36:30.905461 master-0 kubenswrapper[33141]: I0308 03:36:30.904245 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5b213491f434eaf96969b81f553d91a137807d1aa05fbe10cf34450dd9f1422"} err="failed to get container status \"a5b213491f434eaf96969b81f553d91a137807d1aa05fbe10cf34450dd9f1422\": rpc error: code = NotFound desc = could not find container \"a5b213491f434eaf96969b81f553d91a137807d1aa05fbe10cf34450dd9f1422\": container with ID starting with a5b213491f434eaf96969b81f553d91a137807d1aa05fbe10cf34450dd9f1422 not found: ID does not exist" Mar 08 03:36:30.905461 master-0 kubenswrapper[33141]: I0308 03:36:30.904270 33141 scope.go:117] "RemoveContainer" containerID="ee449e7433c4a405cc6fedd37c99af30ee1c65aa3f79f3a17b1bfbdcb68f1429" Mar 08 03:36:30.905461 master-0 kubenswrapper[33141]: I0308 03:36:30.905331 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee449e7433c4a405cc6fedd37c99af30ee1c65aa3f79f3a17b1bfbdcb68f1429"} err="failed to get container status \"ee449e7433c4a405cc6fedd37c99af30ee1c65aa3f79f3a17b1bfbdcb68f1429\": rpc error: code = NotFound desc = could not find container \"ee449e7433c4a405cc6fedd37c99af30ee1c65aa3f79f3a17b1bfbdcb68f1429\": container with ID starting with ee449e7433c4a405cc6fedd37c99af30ee1c65aa3f79f3a17b1bfbdcb68f1429 not found: ID does not exist" Mar 08 03:36:30.905461 master-0 kubenswrapper[33141]: I0308 03:36:30.905359 33141 scope.go:117] "RemoveContainer" containerID="368b8ba6f29a3d4eee5529b66bef9da59c3b79e9dda846465669b30ae0b1c3ef" 
Mar 08 03:36:30.906270 master-0 kubenswrapper[33141]: I0308 03:36:30.906234 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"368b8ba6f29a3d4eee5529b66bef9da59c3b79e9dda846465669b30ae0b1c3ef"} err="failed to get container status \"368b8ba6f29a3d4eee5529b66bef9da59c3b79e9dda846465669b30ae0b1c3ef\": rpc error: code = NotFound desc = could not find container \"368b8ba6f29a3d4eee5529b66bef9da59c3b79e9dda846465669b30ae0b1c3ef\": container with ID starting with 368b8ba6f29a3d4eee5529b66bef9da59c3b79e9dda846465669b30ae0b1c3ef not found: ID does not exist" Mar 08 03:36:30.906270 master-0 kubenswrapper[33141]: I0308 03:36:30.906259 33141 scope.go:117] "RemoveContainer" containerID="f4aba59a0dac77531c80c6a14f39f9750ecdb826fbbcf0547c734560189b6319" Mar 08 03:36:30.906701 master-0 kubenswrapper[33141]: I0308 03:36:30.906662 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4aba59a0dac77531c80c6a14f39f9750ecdb826fbbcf0547c734560189b6319"} err="failed to get container status \"f4aba59a0dac77531c80c6a14f39f9750ecdb826fbbcf0547c734560189b6319\": rpc error: code = NotFound desc = could not find container \"f4aba59a0dac77531c80c6a14f39f9750ecdb826fbbcf0547c734560189b6319\": container with ID starting with f4aba59a0dac77531c80c6a14f39f9750ecdb826fbbcf0547c734560189b6319 not found: ID does not exist" Mar 08 03:36:30.906701 master-0 kubenswrapper[33141]: I0308 03:36:30.906696 33141 scope.go:117] "RemoveContainer" containerID="93d525f5634313a3b094a60485f81885ea0fad7e7ead5c0208227c604d3c848e" Mar 08 03:36:30.907124 master-0 kubenswrapper[33141]: I0308 03:36:30.907068 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93d525f5634313a3b094a60485f81885ea0fad7e7ead5c0208227c604d3c848e"} err="failed to get container status \"93d525f5634313a3b094a60485f81885ea0fad7e7ead5c0208227c604d3c848e\": rpc error: code = NotFound desc = could not find 
container \"93d525f5634313a3b094a60485f81885ea0fad7e7ead5c0208227c604d3c848e\": container with ID starting with 93d525f5634313a3b094a60485f81885ea0fad7e7ead5c0208227c604d3c848e not found: ID does not exist" Mar 08 03:36:30.907124 master-0 kubenswrapper[33141]: I0308 03:36:30.907112 33141 scope.go:117] "RemoveContainer" containerID="1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530" Mar 08 03:36:30.907587 master-0 kubenswrapper[33141]: I0308 03:36:30.907549 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530"} err="failed to get container status \"1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530\": rpc error: code = NotFound desc = could not find container \"1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530\": container with ID starting with 1e3468b145175481bafa7c1f2e300eba2f2fe8985ff77c799fdf697ea24ae530 not found: ID does not exist" Mar 08 03:36:30.907587 master-0 kubenswrapper[33141]: I0308 03:36:30.907578 33141 scope.go:117] "RemoveContainer" containerID="a5b213491f434eaf96969b81f553d91a137807d1aa05fbe10cf34450dd9f1422" Mar 08 03:36:30.908885 master-0 kubenswrapper[33141]: I0308 03:36:30.908048 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5b213491f434eaf96969b81f553d91a137807d1aa05fbe10cf34450dd9f1422"} err="failed to get container status \"a5b213491f434eaf96969b81f553d91a137807d1aa05fbe10cf34450dd9f1422\": rpc error: code = NotFound desc = could not find container \"a5b213491f434eaf96969b81f553d91a137807d1aa05fbe10cf34450dd9f1422\": container with ID starting with a5b213491f434eaf96969b81f553d91a137807d1aa05fbe10cf34450dd9f1422 not found: ID does not exist" Mar 08 03:36:31.313030 master-0 kubenswrapper[33141]: I0308 03:36:31.312864 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:36:31.318985 master-0 kubenswrapper[33141]: I0308 03:36:31.318871 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-f8578dbbb-gzqxh" Mar 08 03:36:32.143431 master-0 kubenswrapper[33141]: I0308 03:36:32.143376 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 08 03:36:32.332299 master-0 kubenswrapper[33141]: I0308 03:36:32.332208 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89044116-4d25-4312-9475-c92acd031a98-var-lock\") pod \"89044116-4d25-4312-9475-c92acd031a98\" (UID: \"89044116-4d25-4312-9475-c92acd031a98\") " Mar 08 03:36:32.333127 master-0 kubenswrapper[33141]: I0308 03:36:32.332339 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89044116-4d25-4312-9475-c92acd031a98-var-lock" (OuterVolumeSpecName: "var-lock") pod "89044116-4d25-4312-9475-c92acd031a98" (UID: "89044116-4d25-4312-9475-c92acd031a98"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:36:32.333127 master-0 kubenswrapper[33141]: I0308 03:36:32.332368 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89044116-4d25-4312-9475-c92acd031a98-kube-api-access\") pod \"89044116-4d25-4312-9475-c92acd031a98\" (UID: \"89044116-4d25-4312-9475-c92acd031a98\") " Mar 08 03:36:32.333127 master-0 kubenswrapper[33141]: I0308 03:36:32.332604 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89044116-4d25-4312-9475-c92acd031a98-kubelet-dir\") pod \"89044116-4d25-4312-9475-c92acd031a98\" (UID: \"89044116-4d25-4312-9475-c92acd031a98\") " Mar 08 03:36:32.333127 master-0 kubenswrapper[33141]: I0308 03:36:32.332700 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89044116-4d25-4312-9475-c92acd031a98-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "89044116-4d25-4312-9475-c92acd031a98" (UID: "89044116-4d25-4312-9475-c92acd031a98"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:36:32.333433 master-0 kubenswrapper[33141]: I0308 03:36:32.333304 33141 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/89044116-4d25-4312-9475-c92acd031a98-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 03:36:32.333433 master-0 kubenswrapper[33141]: I0308 03:36:32.333338 33141 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/89044116-4d25-4312-9475-c92acd031a98-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 03:36:32.336770 master-0 kubenswrapper[33141]: I0308 03:36:32.336709 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89044116-4d25-4312-9475-c92acd031a98-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "89044116-4d25-4312-9475-c92acd031a98" (UID: "89044116-4d25-4312-9475-c92acd031a98"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:36:32.434878 master-0 kubenswrapper[33141]: I0308 03:36:32.434814 33141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/89044116-4d25-4312-9475-c92acd031a98-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 03:36:32.743111 master-0 kubenswrapper[33141]: I0308 03:36:32.742858 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"89044116-4d25-4312-9475-c92acd031a98","Type":"ContainerDied","Data":"27dc48b27fc15373c0f1525c8be7959ace65381f5bff90c8a7ee825b430a2ddb"} Mar 08 03:36:32.743111 master-0 kubenswrapper[33141]: I0308 03:36:32.743043 33141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27dc48b27fc15373c0f1525c8be7959ace65381f5bff90c8a7ee825b430a2ddb" Mar 08 03:36:32.743111 master-0 kubenswrapper[33141]: I0308 03:36:32.742899 33141 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 08 03:36:33.577703 master-0 kubenswrapper[33141]: I0308 03:36:33.577608 33141 patch_prober.go:28] interesting pod/console-6fbfcd994f-49ft7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" start-of-body= Mar 08 03:36:33.578176 master-0 kubenswrapper[33141]: I0308 03:36:33.577723 33141 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6fbfcd994f-49ft7" podUID="d3a1244d-2bc6-40c7-96c7-8e464a55ff4b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" Mar 08 03:36:34.611154 master-0 kubenswrapper[33141]: I0308 03:36:34.611059 33141 patch_prober.go:28] interesting pod/console-748f76c866-99l2l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 08 03:36:34.611154 master-0 kubenswrapper[33141]: I0308 03:36:34.611141 33141 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-748f76c866-99l2l" podUID="04802a97-e959-423f-8ca7-4a8fb5e7e047" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 08 03:36:40.349499 master-0 kubenswrapper[33141]: I0308 03:36:40.349417 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 08 03:36:40.375719 master-0 kubenswrapper[33141]: I0308 03:36:40.375625 33141 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="ce589f76-c965-4a45-bc95-f6b17f90b4d0" Mar 08 03:36:40.375719 master-0 kubenswrapper[33141]: I0308 03:36:40.375692 33141 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="ce589f76-c965-4a45-bc95-f6b17f90b4d0" Mar 08 03:36:40.393179 master-0 kubenswrapper[33141]: I0308 03:36:40.393107 33141 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 08 03:36:40.415882 master-0 kubenswrapper[33141]: I0308 03:36:40.415820 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 08 03:36:40.423343 master-0 kubenswrapper[33141]: I0308 03:36:40.423287 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 08 03:36:40.434052 master-0 kubenswrapper[33141]: I0308 03:36:40.433015 33141 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 08 03:36:40.442250 master-0 kubenswrapper[33141]: I0308 03:36:40.440489 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 08 03:36:40.838095 master-0 kubenswrapper[33141]: I0308 03:36:40.838023 33141 generic.go:334] "Generic (PLEG): container finished" podID="1453f6461bf5d599ad65a4656343ee91" containerID="7a4faa74394b5b858caa119ab419fda702738ca958eb463482969f7f2811c488" exitCode=0 Mar 08 03:36:40.838095 master-0 kubenswrapper[33141]: I0308 03:36:40.838095 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerDied","Data":"7a4faa74394b5b858caa119ab419fda702738ca958eb463482969f7f2811c488"} Mar 08 03:36:40.838388 master-0 kubenswrapper[33141]: I0308 03:36:40.838136 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"d2b4a6a646d109d45a66ee448fae5a9ee1687ca6158e2e9bd41d5c92a5a1f43c"} Mar 08 03:36:41.848746 master-0 kubenswrapper[33141]: I0308 03:36:41.848661 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"3d98673bffd313e7982b48d37ae51c3ea4bb7ee84df92f9e0863eb49e7c9702b"} Mar 08 03:36:41.848746 master-0 kubenswrapper[33141]: I0308 03:36:41.848736 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"18c73f4791e91b7aa490504e58c02b168bd82d63650318d2a7ab3e71d6efa17e"} Mar 08 03:36:41.849398 master-0 kubenswrapper[33141]: I0308 03:36:41.848763 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"ce43ede15f98b3e7c49ef1db3964b70a104ab159ab3e7b8ca6dacef4fef76f8f"} Mar 08 03:36:41.849398 master-0 kubenswrapper[33141]: I0308 03:36:41.848971 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 08 03:36:41.878210 master-0 kubenswrapper[33141]: I0308 03:36:41.878050 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=1.878031398 podStartE2EDuration="1.878031398s" podCreationTimestamp="2026-03-08 03:36:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:36:41.875808689 +0000 UTC m=+315.745701892" watchObservedRunningTime="2026-03-08 03:36:41.878031398 +0000 UTC m=+315.747924601" Mar 08 03:36:43.577502 master-0 kubenswrapper[33141]: I0308 03:36:43.577403 33141 patch_prober.go:28] interesting pod/console-6fbfcd994f-49ft7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" start-of-body= Mar 08 03:36:43.578244 master-0 kubenswrapper[33141]: I0308 03:36:43.577501 33141 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6fbfcd994f-49ft7" podUID="d3a1244d-2bc6-40c7-96c7-8e464a55ff4b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" Mar 08 03:36:44.610520 master-0 kubenswrapper[33141]: I0308 03:36:44.610431 33141 patch_prober.go:28] interesting pod/console-748f76c866-99l2l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 08 03:36:44.610520 master-0 kubenswrapper[33141]: I0308 03:36:44.610517 33141 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-748f76c866-99l2l" podUID="04802a97-e959-423f-8ca7-4a8fb5e7e047" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 08 03:36:50.658481 master-0 kubenswrapper[33141]: I0308 03:36:50.658357 33141 kubelet.go:2421] "SyncLoop ADD" 
source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 08 03:36:50.659446 master-0 kubenswrapper[33141]: E0308 03:36:50.659086 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89044116-4d25-4312-9475-c92acd031a98" containerName="installer" Mar 08 03:36:50.659446 master-0 kubenswrapper[33141]: I0308 03:36:50.659119 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="89044116-4d25-4312-9475-c92acd031a98" containerName="installer" Mar 08 03:36:50.659446 master-0 kubenswrapper[33141]: I0308 03:36:50.659393 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="89044116-4d25-4312-9475-c92acd031a98" containerName="installer" Mar 08 03:36:50.660195 master-0 kubenswrapper[33141]: I0308 03:36:50.660145 33141 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 08 03:36:50.660561 master-0 kubenswrapper[33141]: I0308 03:36:50.660467 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:36:50.660844 master-0 kubenswrapper[33141]: I0308 03:36:50.660784 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="36d4251d3504cdc0ec85144c1379056c" containerName="kube-apiserver" containerID="cri-o://f649f54e219e06b8748da4da2fa27f79cb801d6e2f92b8bb2c0f27802b663a97" gracePeriod=15
Mar 08 03:36:50.660998 master-0 kubenswrapper[33141]: I0308 03:36:50.660851 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="36d4251d3504cdc0ec85144c1379056c" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://e13c6667dcb854c1de369f759c38c0b4f9f63e9afd8fb3ae46c3c1e8f1856795" gracePeriod=15
Mar 08 03:36:50.661113 master-0 kubenswrapper[33141]: I0308 03:36:50.661010 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="36d4251d3504cdc0ec85144c1379056c" containerName="kube-apiserver-cert-syncer" containerID="cri-o://3df8b7dbd5d3786c6e8f4a937c38b68d847caf7fe85004c995f3d4e1019eabad" gracePeriod=15
Mar 08 03:36:50.661226 master-0 kubenswrapper[33141]: I0308 03:36:50.661021 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="36d4251d3504cdc0ec85144c1379056c" containerName="kube-apiserver-check-endpoints" containerID="cri-o://baeb40cdfb5c81711bbc2f3db26c5928f82fe6a4944dec127f7b3d45c111c0f9" gracePeriod=15
Mar 08 03:36:50.665146 master-0 kubenswrapper[33141]: I0308 03:36:50.661896 33141 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 08 03:36:50.665146 master-0 kubenswrapper[33141]: I0308 03:36:50.660993 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="36d4251d3504cdc0ec85144c1379056c" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://ac18e9b8046d5f0859d7f45040b24645c77505a3b2be135a6657603468a587a2" gracePeriod=15
Mar 08 03:36:50.665146 master-0 kubenswrapper[33141]: E0308 03:36:50.662293 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d4251d3504cdc0ec85144c1379056c" containerName="kube-apiserver-check-endpoints"
Mar 08 03:36:50.665146 master-0 kubenswrapper[33141]: I0308 03:36:50.662316 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d4251d3504cdc0ec85144c1379056c" containerName="kube-apiserver-check-endpoints"
Mar 08 03:36:50.665146 master-0 kubenswrapper[33141]: E0308 03:36:50.662338 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d4251d3504cdc0ec85144c1379056c" containerName="setup"
Mar 08 03:36:50.665146 master-0 kubenswrapper[33141]: I0308 03:36:50.662355 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d4251d3504cdc0ec85144c1379056c" containerName="setup"
Mar 08 03:36:50.665146 master-0 kubenswrapper[33141]: E0308 03:36:50.662377 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d4251d3504cdc0ec85144c1379056c" containerName="kube-apiserver"
Mar 08 03:36:50.665146 master-0 kubenswrapper[33141]: I0308 03:36:50.662390 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d4251d3504cdc0ec85144c1379056c" containerName="kube-apiserver"
Mar 08 03:36:50.665146 master-0 kubenswrapper[33141]: E0308 03:36:50.662548 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d4251d3504cdc0ec85144c1379056c" containerName="kube-apiserver-cert-syncer"
Mar 08 03:36:50.665146 master-0 kubenswrapper[33141]: I0308 03:36:50.662608 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d4251d3504cdc0ec85144c1379056c" containerName="kube-apiserver-cert-syncer"
Mar 08 03:36:50.665146 master-0 kubenswrapper[33141]: E0308 03:36:50.662649 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d4251d3504cdc0ec85144c1379056c" containerName="kube-apiserver-cert-regeneration-controller"
Mar 08 03:36:50.665146 master-0 kubenswrapper[33141]: I0308 03:36:50.662664 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d4251d3504cdc0ec85144c1379056c" containerName="kube-apiserver-cert-regeneration-controller"
Mar 08 03:36:50.665146 master-0 kubenswrapper[33141]: E0308 03:36:50.662687 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d4251d3504cdc0ec85144c1379056c" containerName="kube-apiserver-insecure-readyz"
Mar 08 03:36:50.665146 master-0 kubenswrapper[33141]: I0308 03:36:50.662700 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d4251d3504cdc0ec85144c1379056c" containerName="kube-apiserver-insecure-readyz"
Mar 08 03:36:50.665146 master-0 kubenswrapper[33141]: I0308 03:36:50.663074 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d4251d3504cdc0ec85144c1379056c" containerName="kube-apiserver-check-endpoints"
Mar 08 03:36:50.665146 master-0 kubenswrapper[33141]: I0308 03:36:50.663104 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d4251d3504cdc0ec85144c1379056c" containerName="kube-apiserver"
Mar 08 03:36:50.665146 master-0 kubenswrapper[33141]: I0308 03:36:50.663145 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d4251d3504cdc0ec85144c1379056c" containerName="kube-apiserver-insecure-readyz"
Mar 08 03:36:50.665146 master-0 kubenswrapper[33141]: I0308 03:36:50.663182 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d4251d3504cdc0ec85144c1379056c" containerName="kube-apiserver-cert-regeneration-controller"
Mar 08 03:36:50.665146 master-0 kubenswrapper[33141]: I0308 03:36:50.663204 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d4251d3504cdc0ec85144c1379056c" containerName="kube-apiserver-cert-syncer"
Mar 08 03:36:50.783191 master-0 kubenswrapper[33141]: I0308 03:36:50.783110 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:36:50.783373 master-0 kubenswrapper[33141]: I0308 03:36:50.783216 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:36:50.783373 master-0 kubenswrapper[33141]: I0308 03:36:50.783278 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:36:50.783465 master-0 kubenswrapper[33141]: I0308 03:36:50.783394 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:36:50.783514 master-0 kubenswrapper[33141]: I0308 03:36:50.783470 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:36:50.783693 master-0 kubenswrapper[33141]: I0308 03:36:50.783656 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:36:50.783889 master-0 kubenswrapper[33141]: I0308 03:36:50.783801 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:36:50.784072 master-0 kubenswrapper[33141]: I0308 03:36:50.783956 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:36:50.870116 master-0 kubenswrapper[33141]: E0308 03:36:50.870024 33141 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:36:50.886886 master-0 kubenswrapper[33141]: I0308 03:36:50.886782 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:36:50.887054 master-0 kubenswrapper[33141]: I0308 03:36:50.886935 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:36:50.887054 master-0 kubenswrapper[33141]: I0308 03:36:50.886957 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:36:50.887054 master-0 kubenswrapper[33141]: I0308 03:36:50.887028 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:36:50.887263 master-0 kubenswrapper[33141]: I0308 03:36:50.887082 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:36:50.887263 master-0 kubenswrapper[33141]: I0308 03:36:50.887124 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:36:50.887263 master-0 kubenswrapper[33141]: I0308 03:36:50.887097 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:36:50.887263 master-0 kubenswrapper[33141]: I0308 03:36:50.887214 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:36:50.887263 master-0 kubenswrapper[33141]: I0308 03:36:50.887226 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:36:50.887566 master-0 kubenswrapper[33141]: I0308 03:36:50.887256 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5dbd3d3755bd0f9e4667c2fcf3fcf07d-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"5dbd3d3755bd0f9e4667c2fcf3fcf07d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:36:50.887566 master-0 kubenswrapper[33141]: I0308 03:36:50.887481 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:36:50.887566 master-0 kubenswrapper[33141]: I0308 03:36:50.887388 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:36:50.887855 master-0 kubenswrapper[33141]: I0308 03:36:50.887683 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:36:50.887961 master-0 kubenswrapper[33141]: I0308 03:36:50.887787 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:36:50.888136 master-0 kubenswrapper[33141]: I0308 03:36:50.888033 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:36:50.888221 master-0 kubenswrapper[33141]: I0308 03:36:50.888148 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:36:50.940144 master-0 kubenswrapper[33141]: I0308 03:36:50.940015 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_36d4251d3504cdc0ec85144c1379056c/kube-apiserver-cert-syncer/0.log"
Mar 08 03:36:50.941157 master-0 kubenswrapper[33141]: I0308 03:36:50.941101 33141 generic.go:334] "Generic (PLEG): container finished" podID="36d4251d3504cdc0ec85144c1379056c" containerID="baeb40cdfb5c81711bbc2f3db26c5928f82fe6a4944dec127f7b3d45c111c0f9" exitCode=0
Mar 08 03:36:50.941157 master-0 kubenswrapper[33141]: I0308 03:36:50.941146 33141 generic.go:334] "Generic (PLEG): container finished" podID="36d4251d3504cdc0ec85144c1379056c" containerID="e13c6667dcb854c1de369f759c38c0b4f9f63e9afd8fb3ae46c3c1e8f1856795" exitCode=0
Mar 08 03:36:50.941329 master-0 kubenswrapper[33141]: I0308 03:36:50.941169 33141 generic.go:334] "Generic (PLEG): container finished" podID="36d4251d3504cdc0ec85144c1379056c" containerID="ac18e9b8046d5f0859d7f45040b24645c77505a3b2be135a6657603468a587a2" exitCode=0
Mar 08 03:36:50.941329 master-0 kubenswrapper[33141]: I0308 03:36:50.941185 33141 generic.go:334] "Generic (PLEG): container finished" podID="36d4251d3504cdc0ec85144c1379056c" containerID="3df8b7dbd5d3786c6e8f4a937c38b68d847caf7fe85004c995f3d4e1019eabad" exitCode=2
Mar 08 03:36:50.943862 master-0 kubenswrapper[33141]: I0308 03:36:50.943792 33141 generic.go:334] "Generic (PLEG): container finished" podID="fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f" containerID="6e5c4d7b7c2f3383367ed91c12e476d8cf762501448166e101db74c453828781" exitCode=0
Mar 08 03:36:50.943990 master-0 kubenswrapper[33141]: I0308 03:36:50.943876 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f","Type":"ContainerDied","Data":"6e5c4d7b7c2f3383367ed91c12e476d8cf762501448166e101db74c453828781"}
Mar 08 03:36:50.945432 master-0 kubenswrapper[33141]: I0308 03:36:50.945355 33141 status_manager.go:851] "Failed to get status for pod" podUID="fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:36:50.946298 master-0 kubenswrapper[33141]: I0308 03:36:50.946232 33141 status_manager.go:851] "Failed to get status for pod" podUID="36d4251d3504cdc0ec85144c1379056c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:36:51.172314 master-0 kubenswrapper[33141]: I0308 03:36:51.172176 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:36:51.210408 master-0 kubenswrapper[33141]: W0308 03:36:51.210354 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb275ed7e9ce09d69a66613ca3ae3d89e.slice/crio-4fe917016f83ee0f34d0560b328bebb08df0204c287a279354653a63c52d2479 WatchSource:0}: Error finding container 4fe917016f83ee0f34d0560b328bebb08df0204c287a279354653a63c52d2479: Status 404 returned error can't find the container with id 4fe917016f83ee0f34d0560b328bebb08df0204c287a279354653a63c52d2479
Mar 08 03:36:51.215140 master-0 kubenswrapper[33141]: E0308 03:36:51.214944 33141 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189ac074cd1c5f64 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:b275ed7e9ce09d69a66613ca3ae3d89e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:36:51.213639524 +0000 UTC m=+325.083532717,LastTimestamp:2026-03-08 03:36:51.213639524 +0000 UTC m=+325.083532717,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 08 03:36:51.637055 master-0 kubenswrapper[33141]: E0308 03:36:51.636800 33141 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189ac074cd1c5f64 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:b275ed7e9ce09d69a66613ca3ae3d89e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:36:51.213639524 +0000 UTC m=+325.083532717,LastTimestamp:2026-03-08 03:36:51.213639524 +0000 UTC m=+325.083532717,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 08 03:36:51.956204 master-0 kubenswrapper[33141]: I0308 03:36:51.956099 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"b275ed7e9ce09d69a66613ca3ae3d89e","Type":"ContainerStarted","Data":"52b24dedd6ca9d2345e91035f63a6ae995f5f8a2eef031a4da9b7f6c149d27af"}
Mar 08 03:36:51.956204 master-0 kubenswrapper[33141]: I0308 03:36:51.956166 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"b275ed7e9ce09d69a66613ca3ae3d89e","Type":"ContainerStarted","Data":"4fe917016f83ee0f34d0560b328bebb08df0204c287a279354653a63c52d2479"}
Mar 08 03:36:51.958114 master-0 kubenswrapper[33141]: E0308 03:36:51.958036 33141 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:36:51.958700 master-0 kubenswrapper[33141]: I0308 03:36:51.958635 33141 status_manager.go:851] "Failed to get status for pod" podUID="fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:36:51.959729 master-0 kubenswrapper[33141]: I0308 03:36:51.959631 33141 status_manager.go:851] "Failed to get status for pod" podUID="36d4251d3504cdc0ec85144c1379056c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:36:52.425445 master-0 kubenswrapper[33141]: I0308 03:36:52.425376 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Mar 08 03:36:52.426828 master-0 kubenswrapper[33141]: I0308 03:36:52.426761 33141 status_manager.go:851] "Failed to get status for pod" podUID="fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:36:52.514921 master-0 kubenswrapper[33141]: I0308 03:36:52.514827 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f-kubelet-dir\") pod \"fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f\" (UID: \"fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f\") "
Mar 08 03:36:52.515233 master-0 kubenswrapper[33141]: I0308 03:36:52.514900 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f-var-lock\") pod \"fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f\" (UID: \"fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f\") "
Mar 08 03:36:52.515233 master-0 kubenswrapper[33141]: I0308 03:36:52.514994 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f" (UID: "fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:36:52.515233 master-0 kubenswrapper[33141]: I0308 03:36:52.515049 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f-kube-api-access\") pod \"fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f\" (UID: \"fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f\") "
Mar 08 03:36:52.515233 master-0 kubenswrapper[33141]: I0308 03:36:52.515043 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f-var-lock" (OuterVolumeSpecName: "var-lock") pod "fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f" (UID: "fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:36:52.515739 master-0 kubenswrapper[33141]: I0308 03:36:52.515682 33141 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:36:52.515739 master-0 kubenswrapper[33141]: I0308 03:36:52.515734 33141 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 08 03:36:52.519947 master-0 kubenswrapper[33141]: I0308 03:36:52.519846 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f" (UID: "fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:36:52.617290 master-0 kubenswrapper[33141]: I0308 03:36:52.617120 33141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 08 03:36:52.926993 master-0 kubenswrapper[33141]: I0308 03:36:52.926856 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Mar 08 03:36:52.954758 master-0 kubenswrapper[33141]: I0308 03:36:52.954708 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Mar 08 03:36:52.955499 master-0 kubenswrapper[33141]: I0308 03:36:52.955454 33141 status_manager.go:851] "Failed to get status for pod" podUID="100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:36:52.955846 master-0 kubenswrapper[33141]: I0308 03:36:52.955812 33141 status_manager.go:851] "Failed to get status for pod" podUID="fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:36:52.963342 master-0 kubenswrapper[33141]: I0308 03:36:52.963275 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f","Type":"ContainerDied","Data":"4e0ccceee709a80837fe4933f763d049143908d117c8729750eb1a5ab11d96f4"}
Mar 08 03:36:52.963342 master-0 kubenswrapper[33141]: I0308 03:36:52.963336 33141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e0ccceee709a80837fe4933f763d049143908d117c8729750eb1a5ab11d96f4"
Mar 08 03:36:52.963780 master-0 kubenswrapper[33141]: I0308 03:36:52.963311 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Mar 08 03:36:52.984603 master-0 kubenswrapper[33141]: I0308 03:36:52.984542 33141 status_manager.go:851] "Failed to get status for pod" podUID="100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:36:52.985156 master-0 kubenswrapper[33141]: I0308 03:36:52.985105 33141 status_manager.go:851] "Failed to get status for pod" podUID="fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:36:52.990770 master-0 kubenswrapper[33141]: I0308 03:36:52.990721 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Mar 08 03:36:52.991411 master-0 kubenswrapper[33141]: I0308 03:36:52.991359 33141 status_manager.go:851] "Failed to get status for pod" podUID="100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:36:52.991865 master-0 kubenswrapper[33141]: I0308 03:36:52.991819 33141 status_manager.go:851] "Failed to get status for pod" podUID="fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:36:53.145011 master-0 kubenswrapper[33141]: I0308 03:36:53.144877 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_36d4251d3504cdc0ec85144c1379056c/kube-apiserver-cert-syncer/0.log"
Mar 08 03:36:53.145852 master-0 kubenswrapper[33141]: I0308 03:36:53.145804 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:36:53.147242 master-0 kubenswrapper[33141]: I0308 03:36:53.147173 33141 status_manager.go:851] "Failed to get status for pod" podUID="100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:36:53.148259 master-0 kubenswrapper[33141]: I0308 03:36:53.148197 33141 status_manager.go:851] "Failed to get status for pod" podUID="fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:36:53.149143 master-0 kubenswrapper[33141]: I0308 03:36:53.149062 33141 status_manager.go:851] "Failed to get status for pod" podUID="36d4251d3504cdc0ec85144c1379056c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:36:53.332444 master-0 kubenswrapper[33141]: I0308 03:36:53.332342 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-resource-dir\") pod \"36d4251d3504cdc0ec85144c1379056c\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") "
Mar 08 03:36:53.332748 master-0 kubenswrapper[33141]: I0308 03:36:53.332458 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-cert-dir\") pod \"36d4251d3504cdc0ec85144c1379056c\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") "
Mar 08 03:36:53.332748 master-0 kubenswrapper[33141]: I0308 03:36:53.332451 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "36d4251d3504cdc0ec85144c1379056c" (UID: "36d4251d3504cdc0ec85144c1379056c"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:36:53.332748 master-0 kubenswrapper[33141]: I0308 03:36:53.332522 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-audit-dir\") pod \"36d4251d3504cdc0ec85144c1379056c\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") "
Mar 08 03:36:53.332748 master-0 kubenswrapper[33141]: I0308 03:36:53.332596 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "36d4251d3504cdc0ec85144c1379056c" (UID: "36d4251d3504cdc0ec85144c1379056c"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:36:53.332748 master-0 kubenswrapper[33141]: I0308 03:36:53.332556 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "36d4251d3504cdc0ec85144c1379056c" (UID: "36d4251d3504cdc0ec85144c1379056c"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:36:53.333214 master-0 kubenswrapper[33141]: I0308 03:36:53.333180 33141 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:36:53.333214 master-0 kubenswrapper[33141]: I0308 03:36:53.333211 33141 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:36:53.333390 master-0 kubenswrapper[33141]: I0308 03:36:53.333231 33141 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-audit-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:36:53.577843 master-0 kubenswrapper[33141]: I0308 03:36:53.577760 33141 patch_prober.go:28] interesting pod/console-6fbfcd994f-49ft7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" start-of-body=
Mar 08 03:36:53.578153 master-0 kubenswrapper[33141]: I0308 03:36:53.577857 33141 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6fbfcd994f-49ft7" podUID="d3a1244d-2bc6-40c7-96c7-8e464a55ff4b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused"
Mar 08 03:36:53.972824 master-0 kubenswrapper[33141]: I0308 03:36:53.972784 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_36d4251d3504cdc0ec85144c1379056c/kube-apiserver-cert-syncer/0.log"
Mar 08 03:36:53.974019 master-0 kubenswrapper[33141]: I0308 03:36:53.973994 33141 generic.go:334] "Generic (PLEG): container finished" podID="36d4251d3504cdc0ec85144c1379056c" containerID="f649f54e219e06b8748da4da2fa27f79cb801d6e2f92b8bb2c0f27802b663a97" exitCode=0
Mar 08 03:36:53.974118 master-0 kubenswrapper[33141]: I0308 03:36:53.974091 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:36:53.974151 master-0 kubenswrapper[33141]: I0308 03:36:53.974091 33141 scope.go:117] "RemoveContainer" containerID="baeb40cdfb5c81711bbc2f3db26c5928f82fe6a4944dec127f7b3d45c111c0f9"
Mar 08 03:36:53.991285 master-0 kubenswrapper[33141]: I0308 03:36:53.991215 33141 scope.go:117] "RemoveContainer" containerID="e13c6667dcb854c1de369f759c38c0b4f9f63e9afd8fb3ae46c3c1e8f1856795"
Mar 08 03:36:53.994049 master-0 kubenswrapper[33141]: I0308 03:36:53.994005 33141 status_manager.go:851] "Failed to get status for pod" podUID="100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:36:53.994778 master-0 kubenswrapper[33141]: I0308 03:36:53.994746 33141 status_manager.go:851] "Failed to get status for pod" podUID="fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:36:53.995005 master-0 kubenswrapper[33141]: I0308 03:36:53.994981 33141 status_manager.go:851] "Failed to get status for pod" podUID="36d4251d3504cdc0ec85144c1379056c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:36:54.009150 master-0 kubenswrapper[33141]: I0308 03:36:54.008883 33141 scope.go:117] "RemoveContainer" containerID="ac18e9b8046d5f0859d7f45040b24645c77505a3b2be135a6657603468a587a2"
Mar 08 03:36:54.032046 master-0 kubenswrapper[33141]: I0308 03:36:54.031987 33141 scope.go:117] "RemoveContainer" containerID="3df8b7dbd5d3786c6e8f4a937c38b68d847caf7fe85004c995f3d4e1019eabad"
Mar 08 03:36:54.050298 master-0 kubenswrapper[33141]: I0308 03:36:54.050238 33141 scope.go:117] "RemoveContainer" containerID="f649f54e219e06b8748da4da2fa27f79cb801d6e2f92b8bb2c0f27802b663a97"
Mar 08 03:36:54.075324 master-0 kubenswrapper[33141]: I0308 03:36:54.075280 33141 scope.go:117] "RemoveContainer" containerID="4ede5a1be502bf8139c1b63d19c775b7dea1844203156a873af283f6f8c0d456"
Mar 08 03:36:54.094696 master-0 kubenswrapper[33141]: I0308 03:36:54.094513 33141 scope.go:117] "RemoveContainer" containerID="baeb40cdfb5c81711bbc2f3db26c5928f82fe6a4944dec127f7b3d45c111c0f9"
Mar 08 03:36:54.095406 master-0 kubenswrapper[33141]: E0308 03:36:54.095377 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baeb40cdfb5c81711bbc2f3db26c5928f82fe6a4944dec127f7b3d45c111c0f9\": container with ID starting with baeb40cdfb5c81711bbc2f3db26c5928f82fe6a4944dec127f7b3d45c111c0f9 not found: ID does not exist" containerID="baeb40cdfb5c81711bbc2f3db26c5928f82fe6a4944dec127f7b3d45c111c0f9"
Mar 08 03:36:54.095505 master-0 kubenswrapper[33141]: I0308 03:36:54.095411 33141 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"cri-o","ID":"baeb40cdfb5c81711bbc2f3db26c5928f82fe6a4944dec127f7b3d45c111c0f9"} err="failed to get container status \"baeb40cdfb5c81711bbc2f3db26c5928f82fe6a4944dec127f7b3d45c111c0f9\": rpc error: code = NotFound desc = could not find container \"baeb40cdfb5c81711bbc2f3db26c5928f82fe6a4944dec127f7b3d45c111c0f9\": container with ID starting with baeb40cdfb5c81711bbc2f3db26c5928f82fe6a4944dec127f7b3d45c111c0f9 not found: ID does not exist" Mar 08 03:36:54.095505 master-0 kubenswrapper[33141]: I0308 03:36:54.095467 33141 scope.go:117] "RemoveContainer" containerID="e13c6667dcb854c1de369f759c38c0b4f9f63e9afd8fb3ae46c3c1e8f1856795" Mar 08 03:36:54.095870 master-0 kubenswrapper[33141]: E0308 03:36:54.095823 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e13c6667dcb854c1de369f759c38c0b4f9f63e9afd8fb3ae46c3c1e8f1856795\": container with ID starting with e13c6667dcb854c1de369f759c38c0b4f9f63e9afd8fb3ae46c3c1e8f1856795 not found: ID does not exist" containerID="e13c6667dcb854c1de369f759c38c0b4f9f63e9afd8fb3ae46c3c1e8f1856795" Mar 08 03:36:54.095997 master-0 kubenswrapper[33141]: I0308 03:36:54.095879 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e13c6667dcb854c1de369f759c38c0b4f9f63e9afd8fb3ae46c3c1e8f1856795"} err="failed to get container status \"e13c6667dcb854c1de369f759c38c0b4f9f63e9afd8fb3ae46c3c1e8f1856795\": rpc error: code = NotFound desc = could not find container \"e13c6667dcb854c1de369f759c38c0b4f9f63e9afd8fb3ae46c3c1e8f1856795\": container with ID starting with e13c6667dcb854c1de369f759c38c0b4f9f63e9afd8fb3ae46c3c1e8f1856795 not found: ID does not exist" Mar 08 03:36:54.095997 master-0 kubenswrapper[33141]: I0308 03:36:54.095926 33141 scope.go:117] "RemoveContainer" containerID="ac18e9b8046d5f0859d7f45040b24645c77505a3b2be135a6657603468a587a2" Mar 08 03:36:54.096253 master-0 kubenswrapper[33141]: E0308 
03:36:54.096216 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac18e9b8046d5f0859d7f45040b24645c77505a3b2be135a6657603468a587a2\": container with ID starting with ac18e9b8046d5f0859d7f45040b24645c77505a3b2be135a6657603468a587a2 not found: ID does not exist" containerID="ac18e9b8046d5f0859d7f45040b24645c77505a3b2be135a6657603468a587a2" Mar 08 03:36:54.096344 master-0 kubenswrapper[33141]: I0308 03:36:54.096268 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac18e9b8046d5f0859d7f45040b24645c77505a3b2be135a6657603468a587a2"} err="failed to get container status \"ac18e9b8046d5f0859d7f45040b24645c77505a3b2be135a6657603468a587a2\": rpc error: code = NotFound desc = could not find container \"ac18e9b8046d5f0859d7f45040b24645c77505a3b2be135a6657603468a587a2\": container with ID starting with ac18e9b8046d5f0859d7f45040b24645c77505a3b2be135a6657603468a587a2 not found: ID does not exist" Mar 08 03:36:54.096344 master-0 kubenswrapper[33141]: I0308 03:36:54.096283 33141 scope.go:117] "RemoveContainer" containerID="3df8b7dbd5d3786c6e8f4a937c38b68d847caf7fe85004c995f3d4e1019eabad" Mar 08 03:36:54.096690 master-0 kubenswrapper[33141]: E0308 03:36:54.096644 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3df8b7dbd5d3786c6e8f4a937c38b68d847caf7fe85004c995f3d4e1019eabad\": container with ID starting with 3df8b7dbd5d3786c6e8f4a937c38b68d847caf7fe85004c995f3d4e1019eabad not found: ID does not exist" containerID="3df8b7dbd5d3786c6e8f4a937c38b68d847caf7fe85004c995f3d4e1019eabad" Mar 08 03:36:54.096869 master-0 kubenswrapper[33141]: I0308 03:36:54.096823 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3df8b7dbd5d3786c6e8f4a937c38b68d847caf7fe85004c995f3d4e1019eabad"} err="failed to get container status 
\"3df8b7dbd5d3786c6e8f4a937c38b68d847caf7fe85004c995f3d4e1019eabad\": rpc error: code = NotFound desc = could not find container \"3df8b7dbd5d3786c6e8f4a937c38b68d847caf7fe85004c995f3d4e1019eabad\": container with ID starting with 3df8b7dbd5d3786c6e8f4a937c38b68d847caf7fe85004c995f3d4e1019eabad not found: ID does not exist" Mar 08 03:36:54.097013 master-0 kubenswrapper[33141]: I0308 03:36:54.096992 33141 scope.go:117] "RemoveContainer" containerID="f649f54e219e06b8748da4da2fa27f79cb801d6e2f92b8bb2c0f27802b663a97" Mar 08 03:36:54.097623 master-0 kubenswrapper[33141]: E0308 03:36:54.097589 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f649f54e219e06b8748da4da2fa27f79cb801d6e2f92b8bb2c0f27802b663a97\": container with ID starting with f649f54e219e06b8748da4da2fa27f79cb801d6e2f92b8bb2c0f27802b663a97 not found: ID does not exist" containerID="f649f54e219e06b8748da4da2fa27f79cb801d6e2f92b8bb2c0f27802b663a97" Mar 08 03:36:54.097742 master-0 kubenswrapper[33141]: I0308 03:36:54.097627 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f649f54e219e06b8748da4da2fa27f79cb801d6e2f92b8bb2c0f27802b663a97"} err="failed to get container status \"f649f54e219e06b8748da4da2fa27f79cb801d6e2f92b8bb2c0f27802b663a97\": rpc error: code = NotFound desc = could not find container \"f649f54e219e06b8748da4da2fa27f79cb801d6e2f92b8bb2c0f27802b663a97\": container with ID starting with f649f54e219e06b8748da4da2fa27f79cb801d6e2f92b8bb2c0f27802b663a97 not found: ID does not exist" Mar 08 03:36:54.097742 master-0 kubenswrapper[33141]: I0308 03:36:54.097647 33141 scope.go:117] "RemoveContainer" containerID="4ede5a1be502bf8139c1b63d19c775b7dea1844203156a873af283f6f8c0d456" Mar 08 03:36:54.098158 master-0 kubenswrapper[33141]: E0308 03:36:54.098123 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"4ede5a1be502bf8139c1b63d19c775b7dea1844203156a873af283f6f8c0d456\": container with ID starting with 4ede5a1be502bf8139c1b63d19c775b7dea1844203156a873af283f6f8c0d456 not found: ID does not exist" containerID="4ede5a1be502bf8139c1b63d19c775b7dea1844203156a873af283f6f8c0d456" Mar 08 03:36:54.098307 master-0 kubenswrapper[33141]: I0308 03:36:54.098274 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ede5a1be502bf8139c1b63d19c775b7dea1844203156a873af283f6f8c0d456"} err="failed to get container status \"4ede5a1be502bf8139c1b63d19c775b7dea1844203156a873af283f6f8c0d456\": rpc error: code = NotFound desc = could not find container \"4ede5a1be502bf8139c1b63d19c775b7dea1844203156a873af283f6f8c0d456\": container with ID starting with 4ede5a1be502bf8139c1b63d19c775b7dea1844203156a873af283f6f8c0d456 not found: ID does not exist" Mar 08 03:36:54.362440 master-0 kubenswrapper[33141]: I0308 03:36:54.362371 33141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36d4251d3504cdc0ec85144c1379056c" path="/var/lib/kubelet/pods/36d4251d3504cdc0ec85144c1379056c/volumes" Mar 08 03:36:54.610747 master-0 kubenswrapper[33141]: I0308 03:36:54.610695 33141 patch_prober.go:28] interesting pod/console-748f76c866-99l2l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 08 03:36:54.611018 master-0 kubenswrapper[33141]: I0308 03:36:54.610774 33141 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-748f76c866-99l2l" podUID="04802a97-e959-423f-8ca7-4a8fb5e7e047" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 08 03:36:54.891534 master-0 kubenswrapper[33141]: E0308 03:36:54.891408 33141 controller.go:195] "Failed to update lease" 
err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:36:54.892347 master-0 kubenswrapper[33141]: E0308 03:36:54.892271 33141 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:36:54.893421 master-0 kubenswrapper[33141]: E0308 03:36:54.893331 33141 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:36:54.894320 master-0 kubenswrapper[33141]: E0308 03:36:54.894236 33141 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:36:54.895122 master-0 kubenswrapper[33141]: E0308 03:36:54.895045 33141 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:36:54.895122 master-0 kubenswrapper[33141]: I0308 03:36:54.895111 33141 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 08 03:36:54.895939 master-0 kubenswrapper[33141]: E0308 03:36:54.895829 33141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: 
connect: connection refused" interval="200ms" Mar 08 03:36:55.097277 master-0 kubenswrapper[33141]: E0308 03:36:55.097172 33141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 08 03:36:55.498860 master-0 kubenswrapper[33141]: E0308 03:36:55.498784 33141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 08 03:36:56.300632 master-0 kubenswrapper[33141]: E0308 03:36:56.299994 33141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 08 03:36:56.355424 master-0 kubenswrapper[33141]: I0308 03:36:56.355326 33141 status_manager.go:851] "Failed to get status for pod" podUID="100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:36:56.356313 master-0 kubenswrapper[33141]: I0308 03:36:56.356219 33141 status_manager.go:851] "Failed to get status for pod" podUID="fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:36:57.902580 master-0 kubenswrapper[33141]: E0308 
03:36:57.902464 33141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 08 03:37:01.104752 master-0 kubenswrapper[33141]: E0308 03:37:01.104215 33141 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 08 03:37:01.638055 master-0 kubenswrapper[33141]: E0308 03:37:01.637867 33141 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189ac074cd1c5f64 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:b275ed7e9ce09d69a66613ca3ae3d89e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-08 03:36:51.213639524 +0000 UTC m=+325.083532717,LastTimestamp:2026-03-08 03:36:51.213639524 +0000 UTC m=+325.083532717,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 08 03:37:03.577830 master-0 kubenswrapper[33141]: I0308 03:37:03.577535 33141 patch_prober.go:28] 
interesting pod/console-6fbfcd994f-49ft7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" start-of-body= Mar 08 03:37:03.578690 master-0 kubenswrapper[33141]: I0308 03:37:03.577860 33141 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6fbfcd994f-49ft7" podUID="d3a1244d-2bc6-40c7-96c7-8e464a55ff4b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" Mar 08 03:37:04.088092 master-0 kubenswrapper[33141]: I0308 03:37:04.088012 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_d80fb58c61b036bc2179d84399404132/kube-controller-manager/1.log" Mar 08 03:37:04.091325 master-0 kubenswrapper[33141]: I0308 03:37:04.091255 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_d80fb58c61b036bc2179d84399404132/kube-controller-manager/0.log" Mar 08 03:37:04.091509 master-0 kubenswrapper[33141]: I0308 03:37:04.091352 33141 generic.go:334] "Generic (PLEG): container finished" podID="d80fb58c61b036bc2179d84399404132" containerID="7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9" exitCode=1 Mar 08 03:37:04.091509 master-0 kubenswrapper[33141]: I0308 03:37:04.091398 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"d80fb58c61b036bc2179d84399404132","Type":"ContainerDied","Data":"7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9"} Mar 08 03:37:04.091509 master-0 kubenswrapper[33141]: I0308 03:37:04.091451 33141 scope.go:117] "RemoveContainer" containerID="efbf585c23fc1e979a8521b267e8220f735c3268158b1f137e28d2cce1acecfb" Mar 08 03:37:04.092808 master-0 
kubenswrapper[33141]: I0308 03:37:04.092740 33141 scope.go:117] "RemoveContainer" containerID="7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9" Mar 08 03:37:04.093488 master-0 kubenswrapper[33141]: I0308 03:37:04.093397 33141 status_manager.go:851] "Failed to get status for pod" podUID="100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:37:04.093723 master-0 kubenswrapper[33141]: E0308 03:37:04.093501 33141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(d80fb58c61b036bc2179d84399404132)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d80fb58c61b036bc2179d84399404132" Mar 08 03:37:04.095345 master-0 kubenswrapper[33141]: I0308 03:37:04.094564 33141 status_manager.go:851] "Failed to get status for pod" podUID="fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:37:04.096767 master-0 kubenswrapper[33141]: I0308 03:37:04.096714 33141 status_manager.go:851] "Failed to get status for pod" podUID="d80fb58c61b036bc2179d84399404132" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:37:04.610453 master-0 kubenswrapper[33141]: I0308 
03:37:04.610354 33141 patch_prober.go:28] interesting pod/console-748f76c866-99l2l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body= Mar 08 03:37:04.611340 master-0 kubenswrapper[33141]: I0308 03:37:04.610461 33141 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-748f76c866-99l2l" podUID="04802a97-e959-423f-8ca7-4a8fb5e7e047" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" Mar 08 03:37:05.101846 master-0 kubenswrapper[33141]: I0308 03:37:05.101745 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_d80fb58c61b036bc2179d84399404132/kube-controller-manager/1.log" Mar 08 03:37:05.350280 master-0 kubenswrapper[33141]: I0308 03:37:05.350143 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:37:05.351617 master-0 kubenswrapper[33141]: I0308 03:37:05.351562 33141 status_manager.go:851] "Failed to get status for pod" podUID="d80fb58c61b036bc2179d84399404132" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:37:05.352555 master-0 kubenswrapper[33141]: I0308 03:37:05.352389 33141 status_manager.go:851] "Failed to get status for pod" podUID="fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:37:05.353738 master-0 kubenswrapper[33141]: I0308 03:37:05.353664 33141 status_manager.go:851] "Failed to get status for pod" podUID="100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:37:05.385785 master-0 kubenswrapper[33141]: I0308 03:37:05.385738 33141 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="6e1b6ca8-8d9e-4bc5-9c19-35fc5367f1b7" Mar 08 03:37:05.386043 master-0 kubenswrapper[33141]: I0308 03:37:05.386016 33141 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="6e1b6ca8-8d9e-4bc5-9c19-35fc5367f1b7" Mar 08 03:37:05.387499 master-0 kubenswrapper[33141]: E0308 03:37:05.387421 33141 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:37:05.388265 master-0 kubenswrapper[33141]: I0308 03:37:05.388204 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:37:05.427939 master-0 kubenswrapper[33141]: W0308 03:37:05.426300 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dbd3d3755bd0f9e4667c2fcf3fcf07d.slice/crio-7bc6619648eb9ded721f133c0771cd5a81eee809b9aa7b85b1232a1e0370c49f WatchSource:0}: Error finding container 7bc6619648eb9ded721f133c0771cd5a81eee809b9aa7b85b1232a1e0370c49f: Status 404 returned error can't find the container with id 7bc6619648eb9ded721f133c0771cd5a81eee809b9aa7b85b1232a1e0370c49f Mar 08 03:37:06.117887 master-0 kubenswrapper[33141]: I0308 03:37:06.117778 33141 generic.go:334] "Generic (PLEG): container finished" podID="5dbd3d3755bd0f9e4667c2fcf3fcf07d" containerID="c998ed75a17010cbd1bac4192aee0392d8597b1d62e6e1df64be262c69c50ca1" exitCode=0 Mar 08 03:37:06.117887 master-0 kubenswrapper[33141]: I0308 03:37:06.117860 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerDied","Data":"c998ed75a17010cbd1bac4192aee0392d8597b1d62e6e1df64be262c69c50ca1"} Mar 08 03:37:06.119131 master-0 kubenswrapper[33141]: I0308 03:37:06.117946 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerStarted","Data":"7bc6619648eb9ded721f133c0771cd5a81eee809b9aa7b85b1232a1e0370c49f"} Mar 08 03:37:06.119131 master-0 kubenswrapper[33141]: I0308 03:37:06.118650 
33141 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="6e1b6ca8-8d9e-4bc5-9c19-35fc5367f1b7" Mar 08 03:37:06.119131 master-0 kubenswrapper[33141]: I0308 03:37:06.118689 33141 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="6e1b6ca8-8d9e-4bc5-9c19-35fc5367f1b7" Mar 08 03:37:06.120138 master-0 kubenswrapper[33141]: I0308 03:37:06.120066 33141 status_manager.go:851] "Failed to get status for pod" podUID="fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:37:06.120301 master-0 kubenswrapper[33141]: E0308 03:37:06.120072 33141 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:37:06.121185 master-0 kubenswrapper[33141]: I0308 03:37:06.121069 33141 status_manager.go:851] "Failed to get status for pod" podUID="d80fb58c61b036bc2179d84399404132" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 08 03:37:06.122092 master-0 kubenswrapper[33141]: I0308 03:37:06.122017 33141 status_manager.go:851] "Failed to get status for pod" podUID="100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: 
connection refused"
Mar 08 03:37:06.365678 master-0 kubenswrapper[33141]: I0308 03:37:06.365551 33141 status_manager.go:851] "Failed to get status for pod" podUID="fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:37:06.366947 master-0 kubenswrapper[33141]: I0308 03:37:06.366833 33141 status_manager.go:851] "Failed to get status for pod" podUID="d80fb58c61b036bc2179d84399404132" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:37:06.367884 master-0 kubenswrapper[33141]: I0308 03:37:06.367808 33141 status_manager.go:851] "Failed to get status for pod" podUID="5dbd3d3755bd0f9e4667c2fcf3fcf07d" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:37:06.369696 master-0 kubenswrapper[33141]: I0308 03:37:06.369569 33141 status_manager.go:851] "Failed to get status for pod" podUID="100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 08 03:37:07.140529 master-0 kubenswrapper[33141]: I0308 03:37:07.138636 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerStarted","Data":"719e3e9971ae2c351e96bff7bc76e7d13440b92414eb7c9ed7108921ec891ff6"}
Mar 08 03:37:07.140529 master-0 kubenswrapper[33141]: I0308 03:37:07.138709 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerStarted","Data":"23dcd2dfb1f2df42b4ddfc8b252b53b0745fb6b0311815870dcabbe57b881963"}
Mar 08 03:37:07.430965 master-0 kubenswrapper[33141]: I0308 03:37:07.430929 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:37:07.431101 master-0 kubenswrapper[33141]: I0308 03:37:07.431089 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:37:07.431161 master-0 kubenswrapper[33141]: I0308 03:37:07.431152 33141 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:37:07.431802 master-0 kubenswrapper[33141]: I0308 03:37:07.431785 33141 scope.go:117] "RemoveContainer" containerID="7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9"
Mar 08 03:37:07.432351 master-0 kubenswrapper[33141]: E0308 03:37:07.432327 33141 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(d80fb58c61b036bc2179d84399404132)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d80fb58c61b036bc2179d84399404132"
Mar 08 03:37:08.145965 master-0 kubenswrapper[33141]: I0308 03:37:08.145915 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerStarted","Data":"9df2375f8f8e9e3dc5df529f564818cd716e00b11af712818a7411de970e2388"}
Mar 08 03:37:08.145965 master-0 kubenswrapper[33141]: I0308 03:37:08.145968 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerStarted","Data":"0cb880d61eafb8033b4c1b94dad2d1423cddd349c4c3d56f165c070a12b0f837"}
Mar 08 03:37:08.146469 master-0 kubenswrapper[33141]: I0308 03:37:08.145978 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"5dbd3d3755bd0f9e4667c2fcf3fcf07d","Type":"ContainerStarted","Data":"b23bbd66d5c94d5c4c6578f4dbd5492d9e91e337dcbf3211a4adc3e508ccc6e3"}
Mar 08 03:37:08.146469 master-0 kubenswrapper[33141]: I0308 03:37:08.146198 33141 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="6e1b6ca8-8d9e-4bc5-9c19-35fc5367f1b7"
Mar 08 03:37:08.146469 master-0 kubenswrapper[33141]: I0308 03:37:08.146211 33141 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="6e1b6ca8-8d9e-4bc5-9c19-35fc5367f1b7"
Mar 08 03:37:08.146469 master-0 kubenswrapper[33141]: I0308 03:37:08.146371 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:37:10.388795 master-0 kubenswrapper[33141]: I0308 03:37:10.388375 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:37:10.388795 master-0 kubenswrapper[33141]: I0308 03:37:10.388574 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:37:10.394693 master-0 kubenswrapper[33141]: I0308 03:37:10.394635 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:37:13.170612 master-0 kubenswrapper[33141]: I0308 03:37:13.170568 33141 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:37:13.200930 master-0 kubenswrapper[33141]: I0308 03:37:13.200841 33141 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="5dbd3d3755bd0f9e4667c2fcf3fcf07d" podUID="56dbedfd-3ddf-4923-877f-7e53305f60b5"
Mar 08 03:37:13.577291 master-0 kubenswrapper[33141]: I0308 03:37:13.577225 33141 patch_prober.go:28] interesting pod/console-6fbfcd994f-49ft7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" start-of-body=
Mar 08 03:37:13.577535 master-0 kubenswrapper[33141]: I0308 03:37:13.577299 33141 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6fbfcd994f-49ft7" podUID="d3a1244d-2bc6-40c7-96c7-8e464a55ff4b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused"
Mar 08 03:37:14.201712 master-0 kubenswrapper[33141]: I0308 03:37:14.201652 33141 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="6e1b6ca8-8d9e-4bc5-9c19-35fc5367f1b7"
Mar 08 03:37:14.201712 master-0 kubenswrapper[33141]: I0308 03:37:14.201694 33141 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="6e1b6ca8-8d9e-4bc5-9c19-35fc5367f1b7"
Mar 08 03:37:14.208459 master-0 kubenswrapper[33141]: I0308 03:37:14.208404 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 08 03:37:14.610692 master-0 kubenswrapper[33141]: I0308 03:37:14.610593 33141 patch_prober.go:28] interesting pod/console-748f76c866-99l2l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body=
Mar 08 03:37:14.610692 master-0 kubenswrapper[33141]: I0308 03:37:14.610677 33141 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-748f76c866-99l2l" podUID="04802a97-e959-423f-8ca7-4a8fb5e7e047" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused"
Mar 08 03:37:15.212036 master-0 kubenswrapper[33141]: I0308 03:37:15.211948 33141 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="6e1b6ca8-8d9e-4bc5-9c19-35fc5367f1b7"
Mar 08 03:37:15.212036 master-0 kubenswrapper[33141]: I0308 03:37:15.212010 33141 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="6e1b6ca8-8d9e-4bc5-9c19-35fc5367f1b7"
Mar 08 03:37:16.386891 master-0 kubenswrapper[33141]: I0308 03:37:16.386801 33141 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="5dbd3d3755bd0f9e4667c2fcf3fcf07d" podUID="56dbedfd-3ddf-4923-877f-7e53305f60b5"
Mar 08 03:37:19.350963 master-0 kubenswrapper[33141]: I0308 03:37:19.350843 33141 scope.go:117] "RemoveContainer" containerID="7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9"
Mar 08 03:37:20.269550 master-0 kubenswrapper[33141]: I0308 03:37:20.269472 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_d80fb58c61b036bc2179d84399404132/kube-controller-manager/1.log"
Mar 08 03:37:20.271139 master-0 kubenswrapper[33141]: I0308 03:37:20.271064 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"d80fb58c61b036bc2179d84399404132","Type":"ContainerStarted","Data":"332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab"}
Mar 08 03:37:22.584194 master-0 kubenswrapper[33141]: I0308 03:37:22.584121 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 08 03:37:22.896988 master-0 kubenswrapper[33141]: I0308 03:37:22.896718 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 08 03:37:22.897295 master-0 kubenswrapper[33141]: I0308 03:37:22.897110 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 08 03:37:23.392607 master-0 kubenswrapper[33141]: I0308 03:37:23.392503 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 08 03:37:23.398191 master-0 kubenswrapper[33141]: I0308 03:37:23.398121 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 08 03:37:23.410410 master-0 kubenswrapper[33141]: I0308 03:37:23.410352 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 08 03:37:23.576747 master-0 kubenswrapper[33141]: I0308 03:37:23.576668 33141 patch_prober.go:28] interesting pod/console-6fbfcd994f-49ft7 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused" start-of-body=
Mar 08 03:37:23.577130 master-0 kubenswrapper[33141]: I0308 03:37:23.576763 33141 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6fbfcd994f-49ft7" podUID="d3a1244d-2bc6-40c7-96c7-8e464a55ff4b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.98:8443/health\": dial tcp 10.128.0.98:8443: connect: connection refused"
Mar 08 03:37:23.666293 master-0 kubenswrapper[33141]: I0308 03:37:23.666131 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 08 03:37:23.677720 master-0 kubenswrapper[33141]: I0308 03:37:23.677659 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 08 03:37:23.739484 master-0 kubenswrapper[33141]: I0308 03:37:23.739389 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 08 03:37:23.901593 master-0 kubenswrapper[33141]: I0308 03:37:23.901510 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 08 03:37:24.028284 master-0 kubenswrapper[33141]: I0308 03:37:24.028196 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 08 03:37:24.297139 master-0 kubenswrapper[33141]: I0308 03:37:24.296885 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Mar 08 03:37:24.441822 master-0 kubenswrapper[33141]: I0308 03:37:24.441738 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 08 03:37:24.473604 master-0 kubenswrapper[33141]: I0308 03:37:24.473532 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 08 03:37:24.610813 master-0 kubenswrapper[33141]: I0308 03:37:24.610627 33141 patch_prober.go:28] interesting pod/console-748f76c866-99l2l container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused" start-of-body=
Mar 08 03:37:24.610813 master-0 kubenswrapper[33141]: I0308 03:37:24.610710 33141 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-748f76c866-99l2l" podUID="04802a97-e959-423f-8ca7-4a8fb5e7e047" containerName="console" probeResult="failure" output="Get \"https://10.128.0.97:8443/health\": dial tcp 10.128.0.97:8443: connect: connection refused"
Mar 08 03:37:24.682134 master-0 kubenswrapper[33141]: I0308 03:37:24.682061 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 08 03:37:24.738532 master-0 kubenswrapper[33141]: I0308 03:37:24.738418 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Mar 08 03:37:25.645592 master-0 kubenswrapper[33141]: I0308 03:37:25.645477 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-rgflg"
Mar 08 03:37:25.704508 master-0 kubenswrapper[33141]: I0308 03:37:25.704375 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 08 03:37:25.724093 master-0 kubenswrapper[33141]: I0308 03:37:25.721803 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 08 03:37:26.068914 master-0 kubenswrapper[33141]: I0308 03:37:26.068832 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 08 03:37:26.086878 master-0 kubenswrapper[33141]: I0308 03:37:26.086779 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-sp7gt"
Mar 08 03:37:26.102430 master-0 kubenswrapper[33141]: I0308 03:37:26.102331 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-fm6df"
Mar 08 03:37:26.105377 master-0 kubenswrapper[33141]: I0308 03:37:26.105342 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 08 03:37:26.107886 master-0 kubenswrapper[33141]: I0308 03:37:26.107813 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 08 03:37:26.181358 master-0 kubenswrapper[33141]: I0308 03:37:26.181271 33141 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 08 03:37:26.250798 master-0 kubenswrapper[33141]: I0308 03:37:26.250726 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 08 03:37:26.263693 master-0 kubenswrapper[33141]: I0308 03:37:26.263600 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Mar 08 03:37:26.273306 master-0 kubenswrapper[33141]: I0308 03:37:26.273236 33141 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 08 03:37:26.307078 master-0 kubenswrapper[33141]: I0308 03:37:26.306185 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 08 03:37:26.326323 master-0 kubenswrapper[33141]: I0308 03:37:26.326155 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-s25xz"
Mar 08 03:37:26.452932 master-0 kubenswrapper[33141]: I0308 03:37:26.452182 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 08 03:37:26.488016 master-0 kubenswrapper[33141]: I0308 03:37:26.487952 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 08 03:37:26.520202 master-0 kubenswrapper[33141]: I0308 03:37:26.520140 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Mar 08 03:37:26.531646 master-0 kubenswrapper[33141]: I0308 03:37:26.531609 33141 scope.go:117] "RemoveContainer" containerID="22f31e2b7f0321897dacca58338ef528e1d06507bc628197034c61c7576b258f"
Mar 08 03:37:26.548449 master-0 kubenswrapper[33141]: I0308 03:37:26.548404 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 08 03:37:26.664773 master-0 kubenswrapper[33141]: I0308 03:37:26.664731 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Mar 08 03:37:26.666236 master-0 kubenswrapper[33141]: I0308 03:37:26.666184 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 08 03:37:26.688449 master-0 kubenswrapper[33141]: I0308 03:37:26.688069 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 08 03:37:26.751724 master-0 kubenswrapper[33141]: I0308 03:37:26.751677 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 08 03:37:26.817773 master-0 kubenswrapper[33141]: I0308 03:37:26.817699 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Mar 08 03:37:26.841105 master-0 kubenswrapper[33141]: I0308 03:37:26.840989 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 08 03:37:26.843944 master-0 kubenswrapper[33141]: I0308 03:37:26.843884 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Mar 08 03:37:26.905838 master-0 kubenswrapper[33141]: I0308 03:37:26.905731 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 08 03:37:26.935954 master-0 kubenswrapper[33141]: I0308 03:37:26.933662 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 08 03:37:26.989526 master-0 kubenswrapper[33141]: I0308 03:37:26.989449 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 08 03:37:27.033610 master-0 kubenswrapper[33141]: I0308 03:37:27.033534 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 08 03:37:27.170778 master-0 kubenswrapper[33141]: I0308 03:37:27.170657 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Mar 08 03:37:27.178193 master-0 kubenswrapper[33141]: I0308 03:37:27.178115 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Mar 08 03:37:27.180637 master-0 kubenswrapper[33141]: I0308 03:37:27.180589 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 08 03:37:27.185005 master-0 kubenswrapper[33141]: I0308 03:37:27.184963 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 08 03:37:27.204752 master-0 kubenswrapper[33141]: I0308 03:37:27.204667 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 08 03:37:27.249353 master-0 kubenswrapper[33141]: I0308 03:37:27.249279 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 08 03:37:27.304233 master-0 kubenswrapper[33141]: I0308 03:37:27.304180 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 08 03:37:27.317600 master-0 kubenswrapper[33141]: I0308 03:37:27.317538 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 08 03:37:27.334038 master-0 kubenswrapper[33141]: I0308 03:37:27.333798 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 08 03:37:27.432148 master-0 kubenswrapper[33141]: I0308 03:37:27.432017 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:37:27.432148 master-0 kubenswrapper[33141]: I0308 03:37:27.432092 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:37:27.440063 master-0 kubenswrapper[33141]: I0308 03:37:27.440001 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:37:27.517149 master-0 kubenswrapper[33141]: I0308 03:37:27.517063 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 08 03:37:27.571520 master-0 kubenswrapper[33141]: I0308 03:37:27.571424 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 08 03:37:27.649590 master-0 kubenswrapper[33141]: I0308 03:37:27.649522 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 08 03:37:27.658265 master-0 kubenswrapper[33141]: I0308 03:37:27.658040 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 08 03:37:27.672032 master-0 kubenswrapper[33141]: I0308 03:37:27.671692 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 08 03:37:27.674951 master-0 kubenswrapper[33141]: I0308 03:37:27.674922 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Mar 08 03:37:27.679659 master-0 kubenswrapper[33141]: I0308 03:37:27.679596 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 08 03:37:27.702302 master-0 kubenswrapper[33141]: I0308 03:37:27.702117 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 08 03:37:27.737187 master-0 kubenswrapper[33141]: I0308 03:37:27.737099 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 08 03:37:27.759891 master-0 kubenswrapper[33141]: I0308 03:37:27.759812 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 08 03:37:27.803689 master-0 kubenswrapper[33141]: I0308 03:37:27.803596 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Mar 08 03:37:27.860240 master-0 kubenswrapper[33141]: I0308 03:37:27.860144 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 08 03:37:27.885685 master-0 kubenswrapper[33141]: I0308 03:37:27.885631 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 08 03:37:28.019342 master-0 kubenswrapper[33141]: I0308 03:37:28.019280 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Mar 08 03:37:28.074116 master-0 kubenswrapper[33141]: I0308 03:37:28.073857 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 08 03:37:28.075455 master-0 kubenswrapper[33141]: I0308 03:37:28.075382 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-jzkrb"
Mar 08 03:37:28.109269 master-0 kubenswrapper[33141]: I0308 03:37:28.109181 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 08 03:37:28.128871 master-0 kubenswrapper[33141]: I0308 03:37:28.128812 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 08 03:37:28.220831 master-0 kubenswrapper[33141]: I0308 03:37:28.220768 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-ftthh"
Mar 08 03:37:28.250475 master-0 kubenswrapper[33141]: I0308 03:37:28.250405 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 08 03:37:28.262523 master-0 kubenswrapper[33141]: I0308 03:37:28.262483 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 08 03:37:28.266438 master-0 kubenswrapper[33141]: I0308 03:37:28.266401 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 08 03:37:28.295021 master-0 kubenswrapper[33141]: I0308 03:37:28.294883 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 08 03:37:28.307634 master-0 kubenswrapper[33141]: I0308 03:37:28.307556 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 08 03:37:28.314980 master-0 kubenswrapper[33141]: I0308 03:37:28.314941 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 08 03:37:28.467997 master-0 kubenswrapper[33141]: I0308 03:37:28.467945 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 08 03:37:28.494047 master-0 kubenswrapper[33141]: I0308 03:37:28.493991 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Mar 08 03:37:28.520408 master-0 kubenswrapper[33141]: I0308 03:37:28.520339 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-lf8gs"
Mar 08 03:37:28.596589 master-0 kubenswrapper[33141]: I0308 03:37:28.596448 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Mar 08 03:37:28.643590 master-0 kubenswrapper[33141]: I0308 03:37:28.643523 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Mar 08 03:37:28.675480 master-0 kubenswrapper[33141]: I0308 03:37:28.675408 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-d6gwq"
Mar 08 03:37:28.723985 master-0 kubenswrapper[33141]: I0308 03:37:28.723915 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Mar 08 03:37:28.737411 master-0 kubenswrapper[33141]: I0308 03:37:28.737345 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 08 03:37:28.776860 master-0 kubenswrapper[33141]: I0308 03:37:28.776790 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 08 03:37:28.840545 master-0 kubenswrapper[33141]: I0308 03:37:28.840449 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 08 03:37:28.884198 master-0 kubenswrapper[33141]: I0308 03:37:28.884039 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 08 03:37:28.904098 master-0 kubenswrapper[33141]: I0308 03:37:28.903871 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 08 03:37:28.952452 master-0 kubenswrapper[33141]: I0308 03:37:28.952335 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 08 03:37:29.111102 master-0 kubenswrapper[33141]: I0308 03:37:29.111031 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 08 03:37:29.127113 master-0 kubenswrapper[33141]: I0308 03:37:29.127003 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-gqqgx"
Mar 08 03:37:29.137232 master-0 kubenswrapper[33141]: I0308 03:37:29.137102 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 08 03:37:29.172899 master-0 kubenswrapper[33141]: I0308 03:37:29.172819 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 08 03:37:29.209979 master-0 kubenswrapper[33141]: I0308 03:37:29.209885 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 08 03:37:29.226242 master-0 kubenswrapper[33141]: I0308 03:37:29.226190 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Mar 08 03:37:29.233733 master-0 kubenswrapper[33141]: I0308 03:37:29.233662 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Mar 08 03:37:29.326794 master-0 kubenswrapper[33141]: I0308 03:37:29.326737 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 08 03:37:29.388468 master-0 kubenswrapper[33141]: I0308 03:37:29.388340 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-p5nps"
Mar 08 03:37:29.421325 master-0 kubenswrapper[33141]: I0308 03:37:29.421275 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 08 03:37:29.460419 master-0 kubenswrapper[33141]: I0308 03:37:29.460367 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 08 03:37:29.480180 master-0 kubenswrapper[33141]: I0308 03:37:29.480118 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Mar 08 03:37:29.517796 master-0 kubenswrapper[33141]: I0308 03:37:29.517737 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 08 03:37:29.559291 master-0 kubenswrapper[33141]: I0308 03:37:29.559223 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 08 03:37:29.563892 master-0 kubenswrapper[33141]: I0308 03:37:29.563851 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 08 03:37:29.571669 master-0 kubenswrapper[33141]: I0308 03:37:29.571629 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 08 03:37:29.612599 master-0 kubenswrapper[33141]: I0308 03:37:29.612524 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 08 03:37:29.693062 master-0 kubenswrapper[33141]: I0308 03:37:29.692876 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 08 03:37:29.736001 master-0 kubenswrapper[33141]: I0308 03:37:29.735874 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-9h99f"
Mar 08 03:37:29.783355 master-0 kubenswrapper[33141]: I0308 03:37:29.783256 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 08 03:37:29.790858 master-0 kubenswrapper[33141]: I0308 03:37:29.790800 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 08 03:37:29.795636 master-0 kubenswrapper[33141]: I0308 03:37:29.795584 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Mar 08 03:37:29.796002 master-0 kubenswrapper[33141]: I0308 03:37:29.795963 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 08 03:37:29.865935 master-0 kubenswrapper[33141]: I0308 03:37:29.865856 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-46c6c"
Mar 08 03:37:29.879834 master-0 kubenswrapper[33141]: I0308 03:37:29.879780 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-nqkx9"
Mar 08 03:37:29.913792 master-0 kubenswrapper[33141]: I0308 03:37:29.913705 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 08 03:37:29.943641 master-0 kubenswrapper[33141]: I0308 03:37:29.943503 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 08 03:37:29.956076 master-0 kubenswrapper[33141]: I0308 03:37:29.956015 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 08 03:37:29.976256 master-0 kubenswrapper[33141]: I0308 03:37:29.976178 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 08 03:37:30.013168 master-0 kubenswrapper[33141]: I0308 03:37:30.013103 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 08 03:37:30.036546 master-0 kubenswrapper[33141]: I0308 03:37:30.036478 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 08 03:37:30.083174 master-0 kubenswrapper[33141]: I0308 03:37:30.083106 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-pz6cl"
Mar 08 03:37:30.111414 master-0 kubenswrapper[33141]: I0308 03:37:30.111353 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 08 03:37:30.123673 master-0 kubenswrapper[33141]: I0308 03:37:30.123604 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 08 03:37:30.171827 master-0 kubenswrapper[33141]: I0308 03:37:30.171743 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-wvdjh"
Mar 08 03:37:30.209887 master-0 kubenswrapper[33141]: I0308 03:37:30.209730 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Mar 08 03:37:30.382839 master-0 kubenswrapper[33141]: I0308 03:37:30.382723 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 08 03:37:30.400405 master-0 kubenswrapper[33141]: I0308 03:37:30.400333 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 08 03:37:30.411836 master-0 kubenswrapper[33141]: I0308 03:37:30.411746 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 08 03:37:30.422954 master-0 kubenswrapper[33141]: I0308 03:37:30.422843 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 08 03:37:30.456727 master-0 kubenswrapper[33141]: I0308 03:37:30.456617 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Mar 08 03:37:30.468445 master-0 kubenswrapper[33141]: I0308 03:37:30.468281 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 08 03:37:30.612926 master-0 kubenswrapper[33141]: I0308 03:37:30.612806 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-mw5z6"
Mar 08
03:37:30.623384 master-0 kubenswrapper[33141]: I0308 03:37:30.623316 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 08 03:37:30.680978 master-0 kubenswrapper[33141]: I0308 03:37:30.680831 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-8hcmx" Mar 08 03:37:30.741521 master-0 kubenswrapper[33141]: I0308 03:37:30.741371 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 08 03:37:30.762526 master-0 kubenswrapper[33141]: I0308 03:37:30.762448 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-vbs7r" Mar 08 03:37:30.782102 master-0 kubenswrapper[33141]: I0308 03:37:30.782006 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 08 03:37:30.785845 master-0 kubenswrapper[33141]: I0308 03:37:30.785797 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-9c93c1bm2nqd1" Mar 08 03:37:30.831645 master-0 kubenswrapper[33141]: I0308 03:37:30.831541 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 08 03:37:30.864851 master-0 kubenswrapper[33141]: I0308 03:37:30.861945 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 08 03:37:30.865715 master-0 kubenswrapper[33141]: I0308 03:37:30.865631 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 08 03:37:30.928778 master-0 kubenswrapper[33141]: I0308 03:37:30.928683 33141 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 08 03:37:30.958395 master-0 kubenswrapper[33141]: I0308 03:37:30.958295 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 08 03:37:30.980803 master-0 kubenswrapper[33141]: I0308 03:37:30.980715 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 08 03:37:31.073786 master-0 kubenswrapper[33141]: I0308 03:37:31.073699 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-9gswq" Mar 08 03:37:31.088210 master-0 kubenswrapper[33141]: I0308 03:37:31.088172 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 08 03:37:31.162928 master-0 kubenswrapper[33141]: I0308 03:37:31.162834 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 08 03:37:31.198808 master-0 kubenswrapper[33141]: I0308 03:37:31.198706 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 08 03:37:31.209712 master-0 kubenswrapper[33141]: I0308 03:37:31.209653 33141 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 08 03:37:31.218680 master-0 kubenswrapper[33141]: I0308 03:37:31.218608 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-99d2o2jhvt58t" Mar 08 03:37:31.231894 master-0 kubenswrapper[33141]: I0308 03:37:31.231850 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 08 03:37:31.284042 master-0 kubenswrapper[33141]: I0308 03:37:31.283970 33141 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 08 03:37:31.352758 master-0 kubenswrapper[33141]: I0308 03:37:31.352587 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 08 03:37:31.367688 master-0 kubenswrapper[33141]: I0308 03:37:31.367599 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 08 03:37:31.461939 master-0 kubenswrapper[33141]: I0308 03:37:31.461831 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 08 03:37:31.462272 master-0 kubenswrapper[33141]: I0308 03:37:31.462195 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 08 03:37:31.462358 master-0 kubenswrapper[33141]: I0308 03:37:31.462336 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-h4sjt" Mar 08 03:37:31.476642 master-0 kubenswrapper[33141]: I0308 03:37:31.476571 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 08 03:37:31.478658 master-0 kubenswrapper[33141]: I0308 03:37:31.478613 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 08 03:37:31.522356 master-0 kubenswrapper[33141]: I0308 03:37:31.522292 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 08 03:37:31.545330 master-0 kubenswrapper[33141]: I0308 03:37:31.545270 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 08 03:37:31.617752 master-0 kubenswrapper[33141]: I0308 03:37:31.617587 33141 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-rsc8q" Mar 08 03:37:31.679932 master-0 kubenswrapper[33141]: I0308 03:37:31.679843 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 08 03:37:31.726852 master-0 kubenswrapper[33141]: I0308 03:37:31.726764 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 08 03:37:31.752948 master-0 kubenswrapper[33141]: I0308 03:37:31.752868 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 08 03:37:31.819716 master-0 kubenswrapper[33141]: I0308 03:37:31.819639 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-vzsqv" Mar 08 03:37:31.826473 master-0 kubenswrapper[33141]: I0308 03:37:31.826431 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 08 03:37:31.838464 master-0 kubenswrapper[33141]: I0308 03:37:31.838401 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 08 03:37:31.849148 master-0 kubenswrapper[33141]: I0308 03:37:31.849090 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 08 03:37:31.863046 master-0 kubenswrapper[33141]: I0308 03:37:31.862971 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 08 03:37:31.930648 master-0 kubenswrapper[33141]: I0308 03:37:31.930439 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 08 03:37:32.099045 master-0 kubenswrapper[33141]: I0308 03:37:32.098976 33141 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 08 03:37:32.114709 master-0 kubenswrapper[33141]: I0308 03:37:32.114637 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-pf6c2" Mar 08 03:37:32.176300 master-0 kubenswrapper[33141]: I0308 03:37:32.176201 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 08 03:37:32.187678 master-0 kubenswrapper[33141]: I0308 03:37:32.187548 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 08 03:37:32.272973 master-0 kubenswrapper[33141]: I0308 03:37:32.272609 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-278m6" Mar 08 03:37:32.288475 master-0 kubenswrapper[33141]: I0308 03:37:32.288168 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 08 03:37:32.348620 master-0 kubenswrapper[33141]: I0308 03:37:32.348302 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 08 03:37:32.356597 master-0 kubenswrapper[33141]: I0308 03:37:32.356539 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 08 03:37:32.367110 master-0 kubenswrapper[33141]: I0308 03:37:32.366069 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 08 03:37:32.398933 master-0 kubenswrapper[33141]: I0308 03:37:32.398825 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 08 03:37:32.400494 master-0 kubenswrapper[33141]: I0308 03:37:32.400437 33141 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 08 03:37:32.488985 master-0 kubenswrapper[33141]: I0308 03:37:32.488750 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 08 03:37:32.526087 master-0 kubenswrapper[33141]: I0308 03:37:32.526005 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 08 03:37:32.587083 master-0 kubenswrapper[33141]: I0308 03:37:32.586879 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 08 03:37:32.622459 master-0 kubenswrapper[33141]: I0308 03:37:32.619844 33141 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 08 03:37:32.633080 master-0 kubenswrapper[33141]: I0308 03:37:32.632997 33141 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 08 03:37:32.633080 master-0 kubenswrapper[33141]: I0308 03:37:32.633090 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 08 03:37:32.633627 master-0 kubenswrapper[33141]: I0308 03:37:32.633572 33141 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="6e1b6ca8-8d9e-4bc5-9c19-35fc5367f1b7" Mar 08 03:37:32.633627 master-0 kubenswrapper[33141]: I0308 03:37:32.633613 33141 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="6e1b6ca8-8d9e-4bc5-9c19-35fc5367f1b7" Mar 08 03:37:32.641552 master-0 kubenswrapper[33141]: I0308 03:37:32.640862 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 08 03:37:32.654900 master-0 kubenswrapper[33141]: I0308 03:37:32.648798 33141 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 08 03:37:32.668295 master-0 kubenswrapper[33141]: I0308 03:37:32.668198 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=19.668179119 podStartE2EDuration="19.668179119s" podCreationTimestamp="2026-03-08 03:37:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:37:32.666973828 +0000 UTC m=+366.536867111" watchObservedRunningTime="2026-03-08 03:37:32.668179119 +0000 UTC m=+366.538072322" Mar 08 03:37:32.687492 master-0 kubenswrapper[33141]: I0308 03:37:32.687419 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 08 03:37:32.698982 master-0 kubenswrapper[33141]: I0308 03:37:32.698872 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 08 03:37:32.728107 master-0 kubenswrapper[33141]: I0308 03:37:32.728052 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 08 03:37:32.839950 master-0 kubenswrapper[33141]: I0308 03:37:32.838146 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-d4zhc" Mar 08 03:37:32.845142 master-0 kubenswrapper[33141]: I0308 03:37:32.843381 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 08 03:37:32.848952 master-0 kubenswrapper[33141]: I0308 03:37:32.847316 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 08 03:37:32.855389 master-0 kubenswrapper[33141]: I0308 03:37:32.855317 33141 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 08 03:37:32.855611 master-0 kubenswrapper[33141]: I0308 03:37:32.855319 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 08 03:37:32.916460 master-0 kubenswrapper[33141]: I0308 03:37:32.916379 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 08 03:37:32.975803 master-0 kubenswrapper[33141]: I0308 03:37:32.975747 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-h5rwm" Mar 08 03:37:33.027388 master-0 kubenswrapper[33141]: I0308 03:37:33.027326 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 08 03:37:33.106712 master-0 kubenswrapper[33141]: I0308 03:37:33.106551 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 08 03:37:33.191224 master-0 kubenswrapper[33141]: I0308 03:37:33.191156 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 08 03:37:33.352778 master-0 kubenswrapper[33141]: I0308 03:37:33.352714 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 08 03:37:33.382422 master-0 kubenswrapper[33141]: I0308 03:37:33.382290 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 08 03:37:33.471372 master-0 kubenswrapper[33141]: I0308 03:37:33.471287 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-ejf3rfa26fkl2" Mar 08 03:37:33.486638 master-0 kubenswrapper[33141]: I0308 03:37:33.486590 33141 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-7hbhc" Mar 08 03:37:33.544822 master-0 kubenswrapper[33141]: I0308 03:37:33.544774 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 08 03:37:33.583437 master-0 kubenswrapper[33141]: I0308 03:37:33.583308 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:37:33.592658 master-0 kubenswrapper[33141]: I0308 03:37:33.592597 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6fbfcd994f-49ft7" Mar 08 03:37:33.607483 master-0 kubenswrapper[33141]: I0308 03:37:33.607412 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 08 03:37:33.694951 master-0 kubenswrapper[33141]: I0308 03:37:33.694763 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 08 03:37:33.848298 master-0 kubenswrapper[33141]: I0308 03:37:33.848233 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 08 03:37:33.863573 master-0 kubenswrapper[33141]: I0308 03:37:33.863506 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 08 03:37:33.920599 master-0 kubenswrapper[33141]: I0308 03:37:33.920540 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 08 03:37:33.938311 master-0 kubenswrapper[33141]: I0308 03:37:33.938208 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 08 03:37:33.961547 master-0 kubenswrapper[33141]: I0308 03:37:33.961414 
33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 08 03:37:33.964633 master-0 kubenswrapper[33141]: I0308 03:37:33.964591 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 08 03:37:33.993481 master-0 kubenswrapper[33141]: I0308 03:37:33.993396 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 08 03:37:34.128105 master-0 kubenswrapper[33141]: I0308 03:37:34.128050 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 08 03:37:34.130957 master-0 kubenswrapper[33141]: I0308 03:37:34.130808 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 08 03:37:34.132092 master-0 kubenswrapper[33141]: I0308 03:37:34.131248 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 08 03:37:34.210469 master-0 kubenswrapper[33141]: I0308 03:37:34.210413 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-7gb49" Mar 08 03:37:34.349813 master-0 kubenswrapper[33141]: I0308 03:37:34.349743 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 08 03:37:34.389503 master-0 kubenswrapper[33141]: I0308 03:37:34.389407 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 08 03:37:34.394725 master-0 kubenswrapper[33141]: I0308 03:37:34.394666 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 
08 03:37:34.473942 master-0 kubenswrapper[33141]: I0308 03:37:34.473844 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 08 03:37:34.494455 master-0 kubenswrapper[33141]: I0308 03:37:34.494382 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 08 03:37:34.551422 master-0 kubenswrapper[33141]: I0308 03:37:34.551330 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 08 03:37:34.571199 master-0 kubenswrapper[33141]: I0308 03:37:34.571118 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 08 03:37:34.574578 master-0 kubenswrapper[33141]: I0308 03:37:34.574510 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 08 03:37:34.616372 master-0 kubenswrapper[33141]: I0308 03:37:34.615526 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-748f76c866-99l2l" Mar 08 03:37:34.622269 master-0 kubenswrapper[33141]: I0308 03:37:34.622177 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-748f76c866-99l2l" Mar 08 03:37:34.663320 master-0 kubenswrapper[33141]: I0308 03:37:34.663043 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 08 03:37:34.674686 master-0 kubenswrapper[33141]: I0308 03:37:34.674600 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 08 03:37:34.686446 master-0 kubenswrapper[33141]: I0308 03:37:34.686382 33141 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 08 03:37:34.727756 master-0 kubenswrapper[33141]: I0308 03:37:34.727690 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 08 03:37:34.756870 master-0 kubenswrapper[33141]: I0308 03:37:34.756806 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 08 03:37:34.825344 master-0 kubenswrapper[33141]: I0308 03:37:34.825266 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 08 03:37:34.839618 master-0 kubenswrapper[33141]: I0308 03:37:34.839558 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 08 03:37:34.850378 master-0 kubenswrapper[33141]: I0308 03:37:34.850325 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-t6pd7" Mar 08 03:37:34.852639 master-0 kubenswrapper[33141]: I0308 03:37:34.852587 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 08 03:37:34.911366 master-0 kubenswrapper[33141]: I0308 03:37:34.911179 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 08 03:37:34.918018 master-0 kubenswrapper[33141]: I0308 03:37:34.917969 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 08 03:37:35.011040 master-0 kubenswrapper[33141]: I0308 03:37:35.010935 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 08 03:37:35.035383 master-0 kubenswrapper[33141]: I0308 03:37:35.035299 33141 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 08 03:37:35.035633 master-0 kubenswrapper[33141]: I0308 03:37:35.035519 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 08 03:37:35.076433 master-0 kubenswrapper[33141]: I0308 03:37:35.076364 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 08 03:37:35.077105 master-0 kubenswrapper[33141]: I0308 03:37:35.077054 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 08 03:37:35.082452 master-0 kubenswrapper[33141]: I0308 03:37:35.082393 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 08 03:37:35.208607 master-0 kubenswrapper[33141]: I0308 03:37:35.208443 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 08 03:37:35.215790 master-0 kubenswrapper[33141]: I0308 03:37:35.215708 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 08 03:37:35.217794 master-0 kubenswrapper[33141]: I0308 03:37:35.217715 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 08 03:37:35.252954 master-0 kubenswrapper[33141]: I0308 03:37:35.252860 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 08 03:37:35.257551 master-0 kubenswrapper[33141]: I0308 03:37:35.257493 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 08 03:37:35.275345 master-0 kubenswrapper[33141]: I0308 03:37:35.275251 33141 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 08 03:37:35.279052 master-0 kubenswrapper[33141]: I0308 03:37:35.278997 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 08 03:37:35.533612 master-0 kubenswrapper[33141]: I0308 03:37:35.533567 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-dqqnp" Mar 08 03:37:35.602471 master-0 kubenswrapper[33141]: I0308 03:37:35.602413 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 08 03:37:35.669481 master-0 kubenswrapper[33141]: I0308 03:37:35.669407 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 08 03:37:35.681989 master-0 kubenswrapper[33141]: I0308 03:37:35.681948 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 08 03:37:35.694271 master-0 kubenswrapper[33141]: I0308 03:37:35.694231 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 08 03:37:35.699183 master-0 kubenswrapper[33141]: I0308 03:37:35.699133 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 08 03:37:35.710671 master-0 kubenswrapper[33141]: I0308 03:37:35.710610 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 08 03:37:35.730190 master-0 kubenswrapper[33141]: I0308 03:37:35.730129 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 08 03:37:35.758047 master-0 kubenswrapper[33141]: I0308 03:37:35.757965 33141 kubelet.go:2431] "SyncLoop REMOVE" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 08 03:37:35.758368 master-0 kubenswrapper[33141]: I0308 03:37:35.758319 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" containerName="startup-monitor" containerID="cri-o://52b24dedd6ca9d2345e91035f63a6ae995f5f8a2eef031a4da9b7f6c149d27af" gracePeriod=5
Mar 08 03:37:35.792950 master-0 kubenswrapper[33141]: I0308 03:37:35.792783 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 08 03:37:35.967117 master-0 kubenswrapper[33141]: I0308 03:37:35.967042 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 08 03:37:35.990449 master-0 kubenswrapper[33141]: I0308 03:37:35.990390 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 08 03:37:36.038359 master-0 kubenswrapper[33141]: I0308 03:37:36.038271 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 08 03:37:36.043882 master-0 kubenswrapper[33141]: I0308 03:37:36.043662 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-qnnnr"
Mar 08 03:37:36.183158 master-0 kubenswrapper[33141]: I0308 03:37:36.183104 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-g676s"
Mar 08 03:37:36.222757 master-0 kubenswrapper[33141]: I0308 03:37:36.222698 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 08 03:37:36.335421 master-0 kubenswrapper[33141]: I0308 03:37:36.335311 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 08 03:37:36.402963 master-0 kubenswrapper[33141]: I0308 03:37:36.400928 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 08 03:37:36.433721 master-0 kubenswrapper[33141]: I0308 03:37:36.433669 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 08 03:37:36.507936 master-0 kubenswrapper[33141]: I0308 03:37:36.504947 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 08 03:37:36.511886 master-0 kubenswrapper[33141]: I0308 03:37:36.511840 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-748f76c866-99l2l"]
Mar 08 03:37:36.579316 master-0 kubenswrapper[33141]: I0308 03:37:36.579268 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 08 03:37:36.887385 master-0 kubenswrapper[33141]: I0308 03:37:36.887325 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 08 03:37:37.052234 master-0 kubenswrapper[33141]: I0308 03:37:37.052184 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Mar 08 03:37:37.063802 master-0 kubenswrapper[33141]: I0308 03:37:37.063754 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 08 03:37:37.072339 master-0 kubenswrapper[33141]: I0308 03:37:37.072284 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 08 03:37:37.101501 master-0 kubenswrapper[33141]: I0308 03:37:37.101441 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 08 03:37:37.143602 master-0 kubenswrapper[33141]: I0308 03:37:37.143466 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-fvhvd"
Mar 08 03:37:37.153974 master-0 kubenswrapper[33141]: I0308 03:37:37.153625 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 08 03:37:37.185711 master-0 kubenswrapper[33141]: I0308 03:37:37.185650 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 08 03:37:37.341890 master-0 kubenswrapper[33141]: I0308 03:37:37.341817 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 08 03:37:37.406167 master-0 kubenswrapper[33141]: I0308 03:37:37.406027 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 08 03:37:37.425721 master-0 kubenswrapper[33141]: I0308 03:37:37.425597 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 08 03:37:37.433750 master-0 kubenswrapper[33141]: I0308 03:37:37.433692 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 08 03:37:37.438295 master-0 kubenswrapper[33141]: I0308 03:37:37.438243 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:37:37.472985 master-0 kubenswrapper[33141]: I0308 03:37:37.472899 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 08 03:37:37.507414 master-0 kubenswrapper[33141]: I0308 03:37:37.507353 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 08 03:37:37.631124 master-0 kubenswrapper[33141]: I0308 03:37:37.631060 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 08 03:37:37.642440 master-0 kubenswrapper[33141]: I0308 03:37:37.642388 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 08 03:37:37.684923 master-0 kubenswrapper[33141]: I0308 03:37:37.684800 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 08 03:37:37.764467 master-0 kubenswrapper[33141]: I0308 03:37:37.764405 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 08 03:37:37.771328 master-0 kubenswrapper[33141]: I0308 03:37:37.771303 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 08 03:37:37.821854 master-0 kubenswrapper[33141]: I0308 03:37:37.821801 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Mar 08 03:37:37.861198 master-0 kubenswrapper[33141]: I0308 03:37:37.861139 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 08 03:37:37.943771 master-0 kubenswrapper[33141]: I0308 03:37:37.943639 33141 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 08 03:37:38.020006 master-0 kubenswrapper[33141]: I0308 03:37:38.019948 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 08 03:37:38.042075 master-0 kubenswrapper[33141]: I0308 03:37:38.042018 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-da0kci31im4hq"
Mar 08 03:37:38.170550 master-0 kubenswrapper[33141]: I0308 03:37:38.170457 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 08 03:37:38.200220 master-0 kubenswrapper[33141]: I0308 03:37:38.200036 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 08 03:37:38.201233 master-0 kubenswrapper[33141]: I0308 03:37:38.201133 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Mar 08 03:37:38.252225 master-0 kubenswrapper[33141]: I0308 03:37:38.252152 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 08 03:37:38.305811 master-0 kubenswrapper[33141]: I0308 03:37:38.305734 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 08 03:37:38.466751 master-0 kubenswrapper[33141]: I0308 03:37:38.466532 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 08 03:37:38.496681 master-0 kubenswrapper[33141]: I0308 03:37:38.496563 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 08 03:37:38.611290 master-0 kubenswrapper[33141]: I0308 03:37:38.611227 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 08 03:37:38.629398 master-0 kubenswrapper[33141]: I0308 03:37:38.629328 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 08 03:37:38.853800 master-0 kubenswrapper[33141]: I0308 03:37:38.853719 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 08 03:37:38.855188 master-0 kubenswrapper[33141]: I0308 03:37:38.855135 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Mar 08 03:37:38.901198 master-0 kubenswrapper[33141]: I0308 03:37:38.901126 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Mar 08 03:37:38.968635 master-0 kubenswrapper[33141]: I0308 03:37:38.968551 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 08 03:37:39.140043 master-0 kubenswrapper[33141]: I0308 03:37:39.139880 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Mar 08 03:37:39.379665 master-0 kubenswrapper[33141]: I0308 03:37:39.379566 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls"
Mar 08 03:37:39.896171 master-0 kubenswrapper[33141]: I0308 03:37:39.896100 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 08 03:37:40.066989 master-0 kubenswrapper[33141]: I0308 03:37:40.066937 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 08 03:37:41.363847 master-0 kubenswrapper[33141]: I0308 03:37:41.363781 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_b275ed7e9ce09d69a66613ca3ae3d89e/startup-monitor/0.log"
Mar 08 03:37:41.364533 master-0 kubenswrapper[33141]: I0308 03:37:41.363941 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:37:41.469347 master-0 kubenswrapper[33141]: I0308 03:37:41.469267 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_b275ed7e9ce09d69a66613ca3ae3d89e/startup-monitor/0.log"
Mar 08 03:37:41.469347 master-0 kubenswrapper[33141]: I0308 03:37:41.469337 33141 generic.go:334] "Generic (PLEG): container finished" podID="b275ed7e9ce09d69a66613ca3ae3d89e" containerID="52b24dedd6ca9d2345e91035f63a6ae995f5f8a2eef031a4da9b7f6c149d27af" exitCode=137
Mar 08 03:37:41.469645 master-0 kubenswrapper[33141]: I0308 03:37:41.469381 33141 scope.go:117] "RemoveContainer" containerID="52b24dedd6ca9d2345e91035f63a6ae995f5f8a2eef031a4da9b7f6c149d27af"
Mar 08 03:37:41.469645 master-0 kubenswrapper[33141]: I0308 03:37:41.469452 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 08 03:37:41.492340 master-0 kubenswrapper[33141]: I0308 03:37:41.492284 33141 scope.go:117] "RemoveContainer" containerID="52b24dedd6ca9d2345e91035f63a6ae995f5f8a2eef031a4da9b7f6c149d27af"
Mar 08 03:37:41.493403 master-0 kubenswrapper[33141]: E0308 03:37:41.493112 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52b24dedd6ca9d2345e91035f63a6ae995f5f8a2eef031a4da9b7f6c149d27af\": container with ID starting with 52b24dedd6ca9d2345e91035f63a6ae995f5f8a2eef031a4da9b7f6c149d27af not found: ID does not exist" containerID="52b24dedd6ca9d2345e91035f63a6ae995f5f8a2eef031a4da9b7f6c149d27af"
Mar 08 03:37:41.493403 master-0 kubenswrapper[33141]: I0308 03:37:41.493178 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b24dedd6ca9d2345e91035f63a6ae995f5f8a2eef031a4da9b7f6c149d27af"} err="failed to get container status \"52b24dedd6ca9d2345e91035f63a6ae995f5f8a2eef031a4da9b7f6c149d27af\": rpc error: code = NotFound desc = could not find container \"52b24dedd6ca9d2345e91035f63a6ae995f5f8a2eef031a4da9b7f6c149d27af\": container with ID starting with 52b24dedd6ca9d2345e91035f63a6ae995f5f8a2eef031a4da9b7f6c149d27af not found: ID does not exist"
Mar 08 03:37:41.523554 master-0 kubenswrapper[33141]: I0308 03:37:41.523468 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-lock\") pod \"b275ed7e9ce09d69a66613ca3ae3d89e\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") "
Mar 08 03:37:41.523775 master-0 kubenswrapper[33141]: I0308 03:37:41.523634 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-log\") pod \"b275ed7e9ce09d69a66613ca3ae3d89e\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") "
Mar 08 03:37:41.523775 master-0 kubenswrapper[33141]: I0308 03:37:41.523693 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-manifests\") pod \"b275ed7e9ce09d69a66613ca3ae3d89e\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") "
Mar 08 03:37:41.523775 master-0 kubenswrapper[33141]: I0308 03:37:41.523754 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-pod-resource-dir\") pod \"b275ed7e9ce09d69a66613ca3ae3d89e\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") "
Mar 08 03:37:41.523920 master-0 kubenswrapper[33141]: I0308 03:37:41.523799 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-resource-dir\") pod \"b275ed7e9ce09d69a66613ca3ae3d89e\" (UID: \"b275ed7e9ce09d69a66613ca3ae3d89e\") "
Mar 08 03:37:41.524056 master-0 kubenswrapper[33141]: I0308 03:37:41.524025 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-log" (OuterVolumeSpecName: "var-log") pod "b275ed7e9ce09d69a66613ca3ae3d89e" (UID: "b275ed7e9ce09d69a66613ca3ae3d89e"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:37:41.524146 master-0 kubenswrapper[33141]: I0308 03:37:41.524133 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-lock" (OuterVolumeSpecName: "var-lock") pod "b275ed7e9ce09d69a66613ca3ae3d89e" (UID: "b275ed7e9ce09d69a66613ca3ae3d89e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:37:41.524224 master-0 kubenswrapper[33141]: I0308 03:37:41.524212 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-manifests" (OuterVolumeSpecName: "manifests") pod "b275ed7e9ce09d69a66613ca3ae3d89e" (UID: "b275ed7e9ce09d69a66613ca3ae3d89e"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:37:41.524285 master-0 kubenswrapper[33141]: I0308 03:37:41.524225 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "b275ed7e9ce09d69a66613ca3ae3d89e" (UID: "b275ed7e9ce09d69a66613ca3ae3d89e"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:37:41.524360 master-0 kubenswrapper[33141]: I0308 03:37:41.524332 33141 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-log\") on node \"master-0\" DevicePath \"\""
Mar 08 03:37:41.524405 master-0 kubenswrapper[33141]: I0308 03:37:41.524364 33141 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 08 03:37:41.529467 master-0 kubenswrapper[33141]: I0308 03:37:41.529418 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "b275ed7e9ce09d69a66613ca3ae3d89e" (UID: "b275ed7e9ce09d69a66613ca3ae3d89e"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:37:41.625538 master-0 kubenswrapper[33141]: I0308 03:37:41.625445 33141 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-manifests\") on node \"master-0\" DevicePath \"\""
Mar 08 03:37:41.625538 master-0 kubenswrapper[33141]: I0308 03:37:41.625494 33141 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:37:41.625538 master-0 kubenswrapper[33141]: I0308 03:37:41.625512 33141 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b275ed7e9ce09d69a66613ca3ae3d89e-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:37:42.368125 master-0 kubenswrapper[33141]: I0308 03:37:42.367427 33141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" path="/var/lib/kubelet/pods/b275ed7e9ce09d69a66613ca3ae3d89e/volumes"
Mar 08 03:37:59.347842 master-0 kubenswrapper[33141]: I0308 03:37:59.347713 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 08 03:38:01.550711 master-0 kubenswrapper[33141]: I0308 03:38:01.550628 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-748f76c866-99l2l" podUID="04802a97-e959-423f-8ca7-4a8fb5e7e047" containerName="console" containerID="cri-o://e40d7fa49a8d7f37ef3d90985c612a11721227ed1c7f39ae68a15e680adcc470" gracePeriod=15
Mar 08 03:38:01.973078 master-0 kubenswrapper[33141]: I0308 03:38:01.973039 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-748f76c866-99l2l_04802a97-e959-423f-8ca7-4a8fb5e7e047/console/0.log"
Mar 08 03:38:01.973247 master-0 kubenswrapper[33141]: I0308 03:38:01.973114 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:38:02.045159 master-0 kubenswrapper[33141]: I0308 03:38:02.045061 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04802a97-e959-423f-8ca7-4a8fb5e7e047-trusted-ca-bundle\") pod \"04802a97-e959-423f-8ca7-4a8fb5e7e047\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") "
Mar 08 03:38:02.045515 master-0 kubenswrapper[33141]: I0308 03:38:02.045171 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-serving-cert\") pod \"04802a97-e959-423f-8ca7-4a8fb5e7e047\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") "
Mar 08 03:38:02.045515 master-0 kubenswrapper[33141]: I0308 03:38:02.045467 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-oauth-config\") pod \"04802a97-e959-423f-8ca7-4a8fb5e7e047\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") "
Mar 08 03:38:02.045672 master-0 kubenswrapper[33141]: I0308 03:38:02.045549 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-config\") pod \"04802a97-e959-423f-8ca7-4a8fb5e7e047\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") "
Mar 08 03:38:02.045672 master-0 kubenswrapper[33141]: I0308 03:38:02.045616 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/04802a97-e959-423f-8ca7-4a8fb5e7e047-oauth-serving-cert\") pod \"04802a97-e959-423f-8ca7-4a8fb5e7e047\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") "
Mar 08 03:38:02.045812 master-0 kubenswrapper[33141]: I0308 03:38:02.045694 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04802a97-e959-423f-8ca7-4a8fb5e7e047-service-ca\") pod \"04802a97-e959-423f-8ca7-4a8fb5e7e047\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") "
Mar 08 03:38:02.045812 master-0 kubenswrapper[33141]: I0308 03:38:02.045768 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdqpx\" (UniqueName: \"kubernetes.io/projected/04802a97-e959-423f-8ca7-4a8fb5e7e047-kube-api-access-sdqpx\") pod \"04802a97-e959-423f-8ca7-4a8fb5e7e047\" (UID: \"04802a97-e959-423f-8ca7-4a8fb5e7e047\") "
Mar 08 03:38:02.046010 master-0 kubenswrapper[33141]: I0308 03:38:02.045866 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04802a97-e959-423f-8ca7-4a8fb5e7e047-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "04802a97-e959-423f-8ca7-4a8fb5e7e047" (UID: "04802a97-e959-423f-8ca7-4a8fb5e7e047"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 03:38:02.046321 master-0 kubenswrapper[33141]: I0308 03:38:02.046267 33141 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04802a97-e959-423f-8ca7-4a8fb5e7e047-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 08 03:38:02.046638 master-0 kubenswrapper[33141]: I0308 03:38:02.046563 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-config" (OuterVolumeSpecName: "console-config") pod "04802a97-e959-423f-8ca7-4a8fb5e7e047" (UID: "04802a97-e959-423f-8ca7-4a8fb5e7e047"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 03:38:02.048058 master-0 kubenswrapper[33141]: I0308 03:38:02.047992 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04802a97-e959-423f-8ca7-4a8fb5e7e047-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "04802a97-e959-423f-8ca7-4a8fb5e7e047" (UID: "04802a97-e959-423f-8ca7-4a8fb5e7e047"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 03:38:02.048408 master-0 kubenswrapper[33141]: I0308 03:38:02.048305 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04802a97-e959-423f-8ca7-4a8fb5e7e047-service-ca" (OuterVolumeSpecName: "service-ca") pod "04802a97-e959-423f-8ca7-4a8fb5e7e047" (UID: "04802a97-e959-423f-8ca7-4a8fb5e7e047"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 03:38:02.050629 master-0 kubenswrapper[33141]: I0308 03:38:02.049804 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "04802a97-e959-423f-8ca7-4a8fb5e7e047" (UID: "04802a97-e959-423f-8ca7-4a8fb5e7e047"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 03:38:02.050629 master-0 kubenswrapper[33141]: I0308 03:38:02.049882 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "04802a97-e959-423f-8ca7-4a8fb5e7e047" (UID: "04802a97-e959-423f-8ca7-4a8fb5e7e047"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 03:38:02.051822 master-0 kubenswrapper[33141]: I0308 03:38:02.051666 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04802a97-e959-423f-8ca7-4a8fb5e7e047-kube-api-access-sdqpx" (OuterVolumeSpecName: "kube-api-access-sdqpx") pod "04802a97-e959-423f-8ca7-4a8fb5e7e047" (UID: "04802a97-e959-423f-8ca7-4a8fb5e7e047"). InnerVolumeSpecName "kube-api-access-sdqpx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:38:02.148742 master-0 kubenswrapper[33141]: I0308 03:38:02.148629 33141 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/04802a97-e959-423f-8ca7-4a8fb5e7e047-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 08 03:38:02.148742 master-0 kubenswrapper[33141]: I0308 03:38:02.148726 33141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdqpx\" (UniqueName: \"kubernetes.io/projected/04802a97-e959-423f-8ca7-4a8fb5e7e047-kube-api-access-sdqpx\") on node \"master-0\" DevicePath \"\""
Mar 08 03:38:02.148742 master-0 kubenswrapper[33141]: I0308 03:38:02.148759 33141 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 08 03:38:02.149235 master-0 kubenswrapper[33141]: I0308 03:38:02.148781 33141 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 08 03:38:02.149235 master-0 kubenswrapper[33141]: I0308 03:38:02.148802 33141 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/04802a97-e959-423f-8ca7-4a8fb5e7e047-console-config\") on node \"master-0\" DevicePath \"\""
Mar 08 03:38:02.149235 master-0 kubenswrapper[33141]: I0308 03:38:02.148824 33141 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/04802a97-e959-423f-8ca7-4a8fb5e7e047-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 08 03:38:02.665423 master-0 kubenswrapper[33141]: I0308 03:38:02.665270 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-748f76c866-99l2l_04802a97-e959-423f-8ca7-4a8fb5e7e047/console/0.log"
Mar 08 03:38:02.665423 master-0 kubenswrapper[33141]: I0308 03:38:02.665403 33141 generic.go:334] "Generic (PLEG): container finished" podID="04802a97-e959-423f-8ca7-4a8fb5e7e047" containerID="e40d7fa49a8d7f37ef3d90985c612a11721227ed1c7f39ae68a15e680adcc470" exitCode=2
Mar 08 03:38:02.666498 master-0 kubenswrapper[33141]: I0308 03:38:02.665461 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-748f76c866-99l2l" event={"ID":"04802a97-e959-423f-8ca7-4a8fb5e7e047","Type":"ContainerDied","Data":"e40d7fa49a8d7f37ef3d90985c612a11721227ed1c7f39ae68a15e680adcc470"}
Mar 08 03:38:02.666498 master-0 kubenswrapper[33141]: I0308 03:38:02.665509 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-748f76c866-99l2l" event={"ID":"04802a97-e959-423f-8ca7-4a8fb5e7e047","Type":"ContainerDied","Data":"71f051994fd419869febab55e4b9ee893ce52aa603dd3d24069a362a33529882"}
Mar 08 03:38:02.666498 master-0 kubenswrapper[33141]: I0308 03:38:02.665545 33141 scope.go:117] "RemoveContainer" containerID="e40d7fa49a8d7f37ef3d90985c612a11721227ed1c7f39ae68a15e680adcc470"
Mar 08 03:38:02.666498 master-0 kubenswrapper[33141]: I0308 03:38:02.665807 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-748f76c866-99l2l"
Mar 08 03:38:02.693478 master-0 kubenswrapper[33141]: I0308 03:38:02.693416 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-748f76c866-99l2l"]
Mar 08 03:38:02.696715 master-0 kubenswrapper[33141]: I0308 03:38:02.696652 33141 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-748f76c866-99l2l"]
Mar 08 03:38:02.698441 master-0 kubenswrapper[33141]: I0308 03:38:02.698411 33141 scope.go:117] "RemoveContainer" containerID="e40d7fa49a8d7f37ef3d90985c612a11721227ed1c7f39ae68a15e680adcc470"
Mar 08 03:38:02.698897 master-0 kubenswrapper[33141]: E0308 03:38:02.698872 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e40d7fa49a8d7f37ef3d90985c612a11721227ed1c7f39ae68a15e680adcc470\": container with ID starting with e40d7fa49a8d7f37ef3d90985c612a11721227ed1c7f39ae68a15e680adcc470 not found: ID does not exist" containerID="e40d7fa49a8d7f37ef3d90985c612a11721227ed1c7f39ae68a15e680adcc470"
Mar 08 03:38:02.698997 master-0 kubenswrapper[33141]: I0308 03:38:02.698901 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e40d7fa49a8d7f37ef3d90985c612a11721227ed1c7f39ae68a15e680adcc470"} err="failed to get container status \"e40d7fa49a8d7f37ef3d90985c612a11721227ed1c7f39ae68a15e680adcc470\": rpc error: code = NotFound desc = could not find container \"e40d7fa49a8d7f37ef3d90985c612a11721227ed1c7f39ae68a15e680adcc470\": container with ID starting with e40d7fa49a8d7f37ef3d90985c612a11721227ed1c7f39ae68a15e680adcc470 not found: ID does not exist"
Mar 08 03:38:04.366899 master-0 kubenswrapper[33141]: I0308 03:38:04.366773 33141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04802a97-e959-423f-8ca7-4a8fb5e7e047" path="/var/lib/kubelet/pods/04802a97-e959-423f-8ca7-4a8fb5e7e047/volumes"
Mar 08 03:38:05.197469 master-0 kubenswrapper[33141]: I0308 03:38:05.197382 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 08 03:38:05.332643 master-0 kubenswrapper[33141]: I0308 03:38:05.332581 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 08 03:38:07.352301 master-0 kubenswrapper[33141]: I0308 03:38:07.352233 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 08 03:38:08.278294 master-0 kubenswrapper[33141]: I0308 03:38:08.278176 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-bhtmv"
Mar 08 03:38:08.350970 master-0 kubenswrapper[33141]: I0308 03:38:08.350872 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 08 03:38:09.945201 master-0 kubenswrapper[33141]: I0308 03:38:09.945129 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 08 03:38:11.707674 master-0 kubenswrapper[33141]: I0308 03:38:11.707608 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Mar 08 03:38:12.273080 master-0 kubenswrapper[33141]: I0308 03:38:12.273029 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Mar 08 03:38:14.123367 master-0 kubenswrapper[33141]: I0308 03:38:14.123279 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-79dfbb5ff-xk648"]
Mar 08 03:38:14.123974 master-0 kubenswrapper[33141]: E0308 03:38:14.123671 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" containerName="startup-monitor"
Mar 08 03:38:14.123974 master-0 kubenswrapper[33141]: I0308 03:38:14.123712 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" containerName="startup-monitor"
Mar 08 03:38:14.123974 master-0 kubenswrapper[33141]: E0308 03:38:14.123735 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04802a97-e959-423f-8ca7-4a8fb5e7e047" containerName="console"
Mar 08 03:38:14.123974 master-0 kubenswrapper[33141]: I0308 03:38:14.123767 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="04802a97-e959-423f-8ca7-4a8fb5e7e047" containerName="console"
Mar 08 03:38:14.123974 master-0 kubenswrapper[33141]: E0308 03:38:14.123797 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f" containerName="installer"
Mar 08 03:38:14.123974 master-0 kubenswrapper[33141]: I0308 03:38:14.123808 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f" containerName="installer"
Mar 08 03:38:14.124150 master-0 kubenswrapper[33141]: I0308 03:38:14.124054 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f" containerName="installer"
Mar 08 03:38:14.124150 master-0 kubenswrapper[33141]: I0308 03:38:14.124108 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="b275ed7e9ce09d69a66613ca3ae3d89e" containerName="startup-monitor"
Mar 08 03:38:14.124150 master-0 kubenswrapper[33141]: I0308 03:38:14.124129 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="04802a97-e959-423f-8ca7-4a8fb5e7e047" containerName="console"
Mar 08 03:38:14.124790 master-0 kubenswrapper[33141]: I0308 03:38:14.124761 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-79dfbb5ff-xk648"
Mar 08 03:38:14.150838 master-0 kubenswrapper[33141]: I0308 03:38:14.150778 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-79dfbb5ff-xk648"]
Mar 08 03:38:14.274566 master-0 kubenswrapper[33141]: I0308 03:38:14.274478 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-console-config\") pod \"console-79dfbb5ff-xk648\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") " pod="openshift-console/console-79dfbb5ff-xk648"
Mar 08 03:38:14.274806 master-0 kubenswrapper[33141]: I0308 03:38:14.274607 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfn7t\" (UniqueName: \"kubernetes.io/projected/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-kube-api-access-vfn7t\") pod \"console-79dfbb5ff-xk648\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") " pod="openshift-console/console-79dfbb5ff-xk648"
Mar 08 03:38:14.274806 master-0 kubenswrapper[33141]: I0308 03:38:14.274686 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-console-serving-cert\") pod \"console-79dfbb5ff-xk648\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") " pod="openshift-console/console-79dfbb5ff-xk648"
Mar 08 03:38:14.274806 master-0 kubenswrapper[33141]: I0308 03:38:14.274745 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-oauth-serving-cert\") pod \"console-79dfbb5ff-xk648\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") " pod="openshift-console/console-79dfbb5ff-xk648"
Mar 08 03:38:14.274977 master-0 kubenswrapper[33141]: I0308 03:38:14.274828 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-trusted-ca-bundle\") pod \"console-79dfbb5ff-xk648\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") " pod="openshift-console/console-79dfbb5ff-xk648"
Mar 08 03:38:14.275030 master-0 kubenswrapper[33141]: I0308 03:38:14.274997 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-service-ca\") pod \"console-79dfbb5ff-xk648\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") " pod="openshift-console/console-79dfbb5ff-xk648"
Mar 08 03:38:14.275206 master-0 kubenswrapper[33141]: I0308 03:38:14.275154 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-console-oauth-config\") pod \"console-79dfbb5ff-xk648\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") " pod="openshift-console/console-79dfbb5ff-xk648"
Mar 08 03:38:14.377413 master-0 kubenswrapper[33141]: I0308 03:38:14.377253 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfn7t\" (UniqueName: \"kubernetes.io/projected/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-kube-api-access-vfn7t\") pod \"console-79dfbb5ff-xk648\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") " pod="openshift-console/console-79dfbb5ff-xk648"
Mar 08 03:38:14.377620 master-0 kubenswrapper[33141]: I0308 03:38:14.377472 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-console-serving-cert\") pod \"console-79dfbb5ff-xk648\" (UID:
\"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") " pod="openshift-console/console-79dfbb5ff-xk648" Mar 08 03:38:14.377620 master-0 kubenswrapper[33141]: I0308 03:38:14.377533 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-oauth-serving-cert\") pod \"console-79dfbb5ff-xk648\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") " pod="openshift-console/console-79dfbb5ff-xk648" Mar 08 03:38:14.377620 master-0 kubenswrapper[33141]: I0308 03:38:14.377611 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-trusted-ca-bundle\") pod \"console-79dfbb5ff-xk648\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") " pod="openshift-console/console-79dfbb5ff-xk648" Mar 08 03:38:14.377770 master-0 kubenswrapper[33141]: I0308 03:38:14.377737 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-service-ca\") pod \"console-79dfbb5ff-xk648\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") " pod="openshift-console/console-79dfbb5ff-xk648" Mar 08 03:38:14.377816 master-0 kubenswrapper[33141]: I0308 03:38:14.377777 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-console-oauth-config\") pod \"console-79dfbb5ff-xk648\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") " pod="openshift-console/console-79dfbb5ff-xk648" Mar 08 03:38:14.377861 master-0 kubenswrapper[33141]: I0308 03:38:14.377819 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-console-config\") pod 
\"console-79dfbb5ff-xk648\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") " pod="openshift-console/console-79dfbb5ff-xk648" Mar 08 03:38:14.378989 master-0 kubenswrapper[33141]: I0308 03:38:14.378928 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-oauth-serving-cert\") pod \"console-79dfbb5ff-xk648\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") " pod="openshift-console/console-79dfbb5ff-xk648" Mar 08 03:38:14.379172 master-0 kubenswrapper[33141]: I0308 03:38:14.379127 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-service-ca\") pod \"console-79dfbb5ff-xk648\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") " pod="openshift-console/console-79dfbb5ff-xk648" Mar 08 03:38:14.379336 master-0 kubenswrapper[33141]: I0308 03:38:14.379286 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-trusted-ca-bundle\") pod \"console-79dfbb5ff-xk648\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") " pod="openshift-console/console-79dfbb5ff-xk648" Mar 08 03:38:14.379414 master-0 kubenswrapper[33141]: I0308 03:38:14.379393 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-console-config\") pod \"console-79dfbb5ff-xk648\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") " pod="openshift-console/console-79dfbb5ff-xk648" Mar 08 03:38:14.382286 master-0 kubenswrapper[33141]: I0308 03:38:14.382239 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-console-serving-cert\") pod 
\"console-79dfbb5ff-xk648\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") " pod="openshift-console/console-79dfbb5ff-xk648" Mar 08 03:38:14.383301 master-0 kubenswrapper[33141]: I0308 03:38:14.383269 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-console-oauth-config\") pod \"console-79dfbb5ff-xk648\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") " pod="openshift-console/console-79dfbb5ff-xk648" Mar 08 03:38:14.404007 master-0 kubenswrapper[33141]: I0308 03:38:14.403964 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfn7t\" (UniqueName: \"kubernetes.io/projected/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-kube-api-access-vfn7t\") pod \"console-79dfbb5ff-xk648\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") " pod="openshift-console/console-79dfbb5ff-xk648" Mar 08 03:38:14.475147 master-0 kubenswrapper[33141]: I0308 03:38:14.475079 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-79dfbb5ff-xk648" Mar 08 03:38:14.974928 master-0 kubenswrapper[33141]: W0308 03:38:14.963023 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a4b6519_6725_4fc3_bb3b_f5e6e13a6592.slice/crio-dd5910d651e29cc9761f45dcbcef8b34e2ab51d60d8e4620e3a32e9f78ab8459 WatchSource:0}: Error finding container dd5910d651e29cc9761f45dcbcef8b34e2ab51d60d8e4620e3a32e9f78ab8459: Status 404 returned error can't find the container with id dd5910d651e29cc9761f45dcbcef8b34e2ab51d60d8e4620e3a32e9f78ab8459 Mar 08 03:38:14.974928 master-0 kubenswrapper[33141]: I0308 03:38:14.967306 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-79dfbb5ff-xk648"] Mar 08 03:38:15.789767 master-0 kubenswrapper[33141]: I0308 03:38:15.789703 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79dfbb5ff-xk648" event={"ID":"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592","Type":"ContainerStarted","Data":"26ed18f456a0d83cb1b9c08e66787611a5b5be658aab613da6c7f0b5d2083b8d"} Mar 08 03:38:15.790618 master-0 kubenswrapper[33141]: I0308 03:38:15.790592 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79dfbb5ff-xk648" event={"ID":"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592","Type":"ContainerStarted","Data":"dd5910d651e29cc9761f45dcbcef8b34e2ab51d60d8e4620e3a32e9f78ab8459"} Mar 08 03:38:15.821718 master-0 kubenswrapper[33141]: I0308 03:38:15.821610 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-79dfbb5ff-xk648" podStartSLOduration=1.821582934 podStartE2EDuration="1.821582934s" podCreationTimestamp="2026-03-08 03:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:38:15.815214628 +0000 UTC m=+409.685107891" 
watchObservedRunningTime="2026-03-08 03:38:15.821582934 +0000 UTC m=+409.691476167" Mar 08 03:38:21.842965 master-0 kubenswrapper[33141]: I0308 03:38:21.842844 33141 generic.go:334] "Generic (PLEG): container finished" podID="1e82d678-b5bb-4aec-9b5d-435305e8bdc2" containerID="f76a1bff6446c8bbd3a34e5b92f198922251d11d225fb45f11ae978bed808876" exitCode=0 Mar 08 03:38:21.843529 master-0 kubenswrapper[33141]: I0308 03:38:21.842936 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" event={"ID":"1e82d678-b5bb-4aec-9b5d-435305e8bdc2","Type":"ContainerDied","Data":"f76a1bff6446c8bbd3a34e5b92f198922251d11d225fb45f11ae978bed808876"} Mar 08 03:38:21.928110 master-0 kubenswrapper[33141]: I0308 03:38:21.928053 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:38:22.111369 master-0 kubenswrapper[33141]: I0308 03:38:22.111206 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppbl6\" (UniqueName: \"kubernetes.io/projected/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-kube-api-access-ppbl6\") pod \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " Mar 08 03:38:22.111778 master-0 kubenswrapper[33141]: I0308 03:38:22.111743 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-client-certs\") pod \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " Mar 08 03:38:22.112166 master-0 kubenswrapper[33141]: I0308 03:38:22.112132 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-client-ca-bundle\") pod 
\"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " Mar 08 03:38:22.112481 master-0 kubenswrapper[33141]: I0308 03:38:22.112440 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-server-tls\") pod \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " Mar 08 03:38:22.112722 master-0 kubenswrapper[33141]: I0308 03:38:22.112678 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-metrics-server-audit-profiles\") pod \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " Mar 08 03:38:22.113048 master-0 kubenswrapper[33141]: I0308 03:38:22.112958 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-audit-log\") pod \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " Mar 08 03:38:22.113328 master-0 kubenswrapper[33141]: I0308 03:38:22.113285 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-configmap-kubelet-serving-ca-bundle\") pod \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\" (UID: \"1e82d678-b5bb-4aec-9b5d-435305e8bdc2\") " Mar 08 03:38:22.113760 master-0 kubenswrapper[33141]: I0308 03:38:22.113145 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "1e82d678-b5bb-4aec-9b5d-435305e8bdc2" (UID: 
"1e82d678-b5bb-4aec-9b5d-435305e8bdc2"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:38:22.113760 master-0 kubenswrapper[33141]: I0308 03:38:22.113667 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-audit-log" (OuterVolumeSpecName: "audit-log") pod "1e82d678-b5bb-4aec-9b5d-435305e8bdc2" (UID: "1e82d678-b5bb-4aec-9b5d-435305e8bdc2"). InnerVolumeSpecName "audit-log". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 03:38:22.114388 master-0 kubenswrapper[33141]: I0308 03:38:22.114349 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "1e82d678-b5bb-4aec-9b5d-435305e8bdc2" (UID: "1e82d678-b5bb-4aec-9b5d-435305e8bdc2"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 03:38:22.114943 master-0 kubenswrapper[33141]: I0308 03:38:22.114860 33141 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\"" Mar 08 03:38:22.115176 master-0 kubenswrapper[33141]: I0308 03:38:22.115138 33141 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-audit-log\") on node \"master-0\" DevicePath \"\"" Mar 08 03:38:22.115498 master-0 kubenswrapper[33141]: I0308 03:38:22.115462 33141 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 08 03:38:22.115708 master-0 kubenswrapper[33141]: I0308 03:38:22.115175 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "1e82d678-b5bb-4aec-9b5d-435305e8bdc2" (UID: "1e82d678-b5bb-4aec-9b5d-435305e8bdc2"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:38:22.117243 master-0 kubenswrapper[33141]: I0308 03:38:22.117151 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-kube-api-access-ppbl6" (OuterVolumeSpecName: "kube-api-access-ppbl6") pod "1e82d678-b5bb-4aec-9b5d-435305e8bdc2" (UID: "1e82d678-b5bb-4aec-9b5d-435305e8bdc2"). InnerVolumeSpecName "kube-api-access-ppbl6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:38:22.117975 master-0 kubenswrapper[33141]: I0308 03:38:22.117797 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "1e82d678-b5bb-4aec-9b5d-435305e8bdc2" (UID: "1e82d678-b5bb-4aec-9b5d-435305e8bdc2"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:38:22.118190 master-0 kubenswrapper[33141]: I0308 03:38:22.118125 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "1e82d678-b5bb-4aec-9b5d-435305e8bdc2" (UID: "1e82d678-b5bb-4aec-9b5d-435305e8bdc2"). InnerVolumeSpecName "client-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 03:38:22.217630 master-0 kubenswrapper[33141]: I0308 03:38:22.217447 33141 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-client-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 08 03:38:22.217630 master-0 kubenswrapper[33141]: I0308 03:38:22.217515 33141 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\"" Mar 08 03:38:22.217630 master-0 kubenswrapper[33141]: I0308 03:38:22.217540 33141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppbl6\" (UniqueName: \"kubernetes.io/projected/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-kube-api-access-ppbl6\") on node \"master-0\" DevicePath \"\"" Mar 08 03:38:22.217630 master-0 kubenswrapper[33141]: I0308 03:38:22.217560 33141 reconciler_common.go:293] 
"Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1e82d678-b5bb-4aec-9b5d-435305e8bdc2-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Mar 08 03:38:22.859896 master-0 kubenswrapper[33141]: I0308 03:38:22.859814 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" event={"ID":"1e82d678-b5bb-4aec-9b5d-435305e8bdc2","Type":"ContainerDied","Data":"005487746ccdf8af07cdeab4d2100f98db1e134d2cd05ee46be8a62328152f7d"} Mar 08 03:38:22.860420 master-0 kubenswrapper[33141]: I0308 03:38:22.859881 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" Mar 08 03:38:22.860420 master-0 kubenswrapper[33141]: I0308 03:38:22.859927 33141 scope.go:117] "RemoveContainer" containerID="f76a1bff6446c8bbd3a34e5b92f198922251d11d225fb45f11ae978bed808876" Mar 08 03:38:22.903051 master-0 kubenswrapper[33141]: I0308 03:38:22.902983 33141 patch_prober.go:28] interesting pod/metrics-server-6977dfbb45-dwjx9 container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.128.0.74:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 03:38:22.903229 master-0 kubenswrapper[33141]: I0308 03:38:22.903064 33141 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-6977dfbb45-dwjx9" podUID="1e82d678-b5bb-4aec-9b5d-435305e8bdc2" containerName="metrics-server" probeResult="failure" output="Get \"https://10.128.0.74:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 03:38:23.092267 master-0 kubenswrapper[33141]: I0308 03:38:23.092190 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-6977dfbb45-dwjx9"] 
Mar 08 03:38:23.103194 master-0 kubenswrapper[33141]: I0308 03:38:23.103127 33141 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-6977dfbb45-dwjx9"] Mar 08 03:38:24.366987 master-0 kubenswrapper[33141]: I0308 03:38:24.366876 33141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e82d678-b5bb-4aec-9b5d-435305e8bdc2" path="/var/lib/kubelet/pods/1e82d678-b5bb-4aec-9b5d-435305e8bdc2/volumes" Mar 08 03:38:24.475511 master-0 kubenswrapper[33141]: I0308 03:38:24.475406 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-79dfbb5ff-xk648" Mar 08 03:38:24.475511 master-0 kubenswrapper[33141]: I0308 03:38:24.475512 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-79dfbb5ff-xk648" Mar 08 03:38:24.480778 master-0 kubenswrapper[33141]: I0308 03:38:24.480717 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-79dfbb5ff-xk648" Mar 08 03:38:24.893724 master-0 kubenswrapper[33141]: I0308 03:38:24.893660 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-79dfbb5ff-xk648" Mar 08 03:38:24.989820 master-0 kubenswrapper[33141]: I0308 03:38:24.989749 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6fbfcd994f-49ft7"] Mar 08 03:38:39.075963 master-0 kubenswrapper[33141]: I0308 03:38:39.075835 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 08 03:38:39.076873 master-0 kubenswrapper[33141]: E0308 03:38:39.076328 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e82d678-b5bb-4aec-9b5d-435305e8bdc2" containerName="metrics-server" Mar 08 03:38:39.076873 master-0 kubenswrapper[33141]: I0308 03:38:39.076354 33141 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1e82d678-b5bb-4aec-9b5d-435305e8bdc2" containerName="metrics-server" Mar 08 03:38:39.076873 master-0 kubenswrapper[33141]: I0308 03:38:39.076635 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e82d678-b5bb-4aec-9b5d-435305e8bdc2" containerName="metrics-server" Mar 08 03:38:39.077797 master-0 kubenswrapper[33141]: I0308 03:38:39.077450 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 03:38:39.081509 master-0 kubenswrapper[33141]: I0308 03:38:39.081436 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-2tj6k" Mar 08 03:38:39.081727 master-0 kubenswrapper[33141]: I0308 03:38:39.081654 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 08 03:38:39.128037 master-0 kubenswrapper[33141]: I0308 03:38:39.099524 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 08 03:38:39.129478 master-0 kubenswrapper[33141]: I0308 03:38:39.129433 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4789137f-dcfe-4afa-9f1e-91546be2c979-kube-api-access\") pod \"installer-4-master-0\" (UID: \"4789137f-dcfe-4afa-9f1e-91546be2c979\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 03:38:39.129704 master-0 kubenswrapper[33141]: I0308 03:38:39.129636 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4789137f-dcfe-4afa-9f1e-91546be2c979-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"4789137f-dcfe-4afa-9f1e-91546be2c979\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 03:38:39.129704 
master-0 kubenswrapper[33141]: I0308 03:38:39.129685 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4789137f-dcfe-4afa-9f1e-91546be2c979-var-lock\") pod \"installer-4-master-0\" (UID: \"4789137f-dcfe-4afa-9f1e-91546be2c979\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 03:38:39.231687 master-0 kubenswrapper[33141]: I0308 03:38:39.231620 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4789137f-dcfe-4afa-9f1e-91546be2c979-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"4789137f-dcfe-4afa-9f1e-91546be2c979\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 03:38:39.231965 master-0 kubenswrapper[33141]: I0308 03:38:39.231726 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4789137f-dcfe-4afa-9f1e-91546be2c979-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"4789137f-dcfe-4afa-9f1e-91546be2c979\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 03:38:39.231965 master-0 kubenswrapper[33141]: I0308 03:38:39.231776 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4789137f-dcfe-4afa-9f1e-91546be2c979-var-lock\") pod \"installer-4-master-0\" (UID: \"4789137f-dcfe-4afa-9f1e-91546be2c979\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 03:38:39.231965 master-0 kubenswrapper[33141]: I0308 03:38:39.231807 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4789137f-dcfe-4afa-9f1e-91546be2c979-kube-api-access\") pod \"installer-4-master-0\" (UID: \"4789137f-dcfe-4afa-9f1e-91546be2c979\") " 
pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 03:38:39.231965 master-0 kubenswrapper[33141]: I0308 03:38:39.231806 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4789137f-dcfe-4afa-9f1e-91546be2c979-var-lock\") pod \"installer-4-master-0\" (UID: \"4789137f-dcfe-4afa-9f1e-91546be2c979\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 03:38:39.251592 master-0 kubenswrapper[33141]: I0308 03:38:39.251550 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4789137f-dcfe-4afa-9f1e-91546be2c979-kube-api-access\") pod \"installer-4-master-0\" (UID: \"4789137f-dcfe-4afa-9f1e-91546be2c979\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 03:38:39.451220 master-0 kubenswrapper[33141]: I0308 03:38:39.451033 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 03:38:39.976140 master-0 kubenswrapper[33141]: I0308 03:38:39.976074 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 08 03:38:40.034482 master-0 kubenswrapper[33141]: I0308 03:38:40.034417 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"4789137f-dcfe-4afa-9f1e-91546be2c979","Type":"ContainerStarted","Data":"20eac49adc3fdfba262c6d581be3c93425c587ca9c06252c7121a77933a0d776"} Mar 08 03:38:41.049425 master-0 kubenswrapper[33141]: I0308 03:38:41.049329 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"4789137f-dcfe-4afa-9f1e-91546be2c979","Type":"ContainerStarted","Data":"13028b3dc6e0a9b6aac71e55250763d0b3ec7504976e2ced0eb6d5b166a8a90f"} Mar 08 03:38:41.081117 master-0 
kubenswrapper[33141]: I0308 03:38:41.080833 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=2.080803984 podStartE2EDuration="2.080803984s" podCreationTimestamp="2026-03-08 03:38:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:38:41.072828847 +0000 UTC m=+434.942722080" watchObservedRunningTime="2026-03-08 03:38:41.080803984 +0000 UTC m=+434.950697217"
Mar 08 03:38:50.043151 master-0 kubenswrapper[33141]: I0308 03:38:50.042991 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6fbfcd994f-49ft7" podUID="d3a1244d-2bc6-40c7-96c7-8e464a55ff4b" containerName="console" containerID="cri-o://c02386a3e18dc137b5769051229bfe72e1b873c5d5f713a6682227fadacb819d" gracePeriod=15
Mar 08 03:38:50.643113 master-0 kubenswrapper[33141]: I0308 03:38:50.643044 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6fbfcd994f-49ft7_d3a1244d-2bc6-40c7-96c7-8e464a55ff4b/console/0.log"
Mar 08 03:38:50.643387 master-0 kubenswrapper[33141]: I0308 03:38:50.643138 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6fbfcd994f-49ft7"
Mar 08 03:38:50.791558 master-0 kubenswrapper[33141]: I0308 03:38:50.791459 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-trusted-ca-bundle\") pod \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") "
Mar 08 03:38:50.791895 master-0 kubenswrapper[33141]: I0308 03:38:50.791601 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-console-serving-cert\") pod \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") "
Mar 08 03:38:50.791895 master-0 kubenswrapper[33141]: I0308 03:38:50.791714 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tc8x\" (UniqueName: \"kubernetes.io/projected/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-kube-api-access-5tc8x\") pod \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") "
Mar 08 03:38:50.792479 master-0 kubenswrapper[33141]: I0308 03:38:50.792384 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-console-config\") pod \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") "
Mar 08 03:38:50.793448 master-0 kubenswrapper[33141]: I0308 03:38:50.793355 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-service-ca\") pod \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") "
Mar 08 03:38:50.793594 master-0 kubenswrapper[33141]: I0308 03:38:50.793455 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-console-oauth-config\") pod \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") "
Mar 08 03:38:50.793594 master-0 kubenswrapper[33141]: I0308 03:38:50.793484 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-oauth-serving-cert\") pod \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\" (UID: \"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b\") "
Mar 08 03:38:50.793817 master-0 kubenswrapper[33141]: I0308 03:38:50.793535 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d3a1244d-2bc6-40c7-96c7-8e464a55ff4b" (UID: "d3a1244d-2bc6-40c7-96c7-8e464a55ff4b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 03:38:50.794440 master-0 kubenswrapper[33141]: I0308 03:38:50.794375 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-console-config" (OuterVolumeSpecName: "console-config") pod "d3a1244d-2bc6-40c7-96c7-8e464a55ff4b" (UID: "d3a1244d-2bc6-40c7-96c7-8e464a55ff4b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 03:38:50.794440 master-0 kubenswrapper[33141]: I0308 03:38:50.794407 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-service-ca" (OuterVolumeSpecName: "service-ca") pod "d3a1244d-2bc6-40c7-96c7-8e464a55ff4b" (UID: "d3a1244d-2bc6-40c7-96c7-8e464a55ff4b"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 03:38:50.794898 master-0 kubenswrapper[33141]: I0308 03:38:50.794796 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d3a1244d-2bc6-40c7-96c7-8e464a55ff4b" (UID: "d3a1244d-2bc6-40c7-96c7-8e464a55ff4b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 03:38:50.795397 master-0 kubenswrapper[33141]: I0308 03:38:50.795338 33141 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 08 03:38:50.795641 master-0 kubenswrapper[33141]: I0308 03:38:50.795613 33141 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 08 03:38:50.795796 master-0 kubenswrapper[33141]: I0308 03:38:50.795772 33141 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 08 03:38:50.795985 master-0 kubenswrapper[33141]: I0308 03:38:50.795959 33141 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-console-config\") on node \"master-0\" DevicePath \"\""
Mar 08 03:38:50.797807 master-0 kubenswrapper[33141]: I0308 03:38:50.797741 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-kube-api-access-5tc8x" (OuterVolumeSpecName: "kube-api-access-5tc8x") pod "d3a1244d-2bc6-40c7-96c7-8e464a55ff4b" (UID: "d3a1244d-2bc6-40c7-96c7-8e464a55ff4b"). InnerVolumeSpecName "kube-api-access-5tc8x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:38:50.798374 master-0 kubenswrapper[33141]: I0308 03:38:50.798297 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d3a1244d-2bc6-40c7-96c7-8e464a55ff4b" (UID: "d3a1244d-2bc6-40c7-96c7-8e464a55ff4b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 03:38:50.799152 master-0 kubenswrapper[33141]: I0308 03:38:50.799081 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d3a1244d-2bc6-40c7-96c7-8e464a55ff4b" (UID: "d3a1244d-2bc6-40c7-96c7-8e464a55ff4b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 03:38:50.898530 master-0 kubenswrapper[33141]: I0308 03:38:50.898299 33141 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 08 03:38:50.898530 master-0 kubenswrapper[33141]: I0308 03:38:50.898399 33141 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 08 03:38:50.898530 master-0 kubenswrapper[33141]: I0308 03:38:50.898436 33141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tc8x\" (UniqueName: \"kubernetes.io/projected/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b-kube-api-access-5tc8x\") on node \"master-0\" DevicePath \"\""
Mar 08 03:38:51.176188 master-0 kubenswrapper[33141]: I0308 03:38:51.176008 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6fbfcd994f-49ft7_d3a1244d-2bc6-40c7-96c7-8e464a55ff4b/console/0.log"
Mar 08 03:38:51.176188 master-0 kubenswrapper[33141]: I0308 03:38:51.176088 33141 generic.go:334] "Generic (PLEG): container finished" podID="d3a1244d-2bc6-40c7-96c7-8e464a55ff4b" containerID="c02386a3e18dc137b5769051229bfe72e1b873c5d5f713a6682227fadacb819d" exitCode=2
Mar 08 03:38:51.176188 master-0 kubenswrapper[33141]: I0308 03:38:51.176134 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fbfcd994f-49ft7" event={"ID":"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b","Type":"ContainerDied","Data":"c02386a3e18dc137b5769051229bfe72e1b873c5d5f713a6682227fadacb819d"}
Mar 08 03:38:51.176188 master-0 kubenswrapper[33141]: I0308 03:38:51.176180 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6fbfcd994f-49ft7" event={"ID":"d3a1244d-2bc6-40c7-96c7-8e464a55ff4b","Type":"ContainerDied","Data":"47d3f371c33823a483f2a669c21d59d08d6fdfe7d6cdeb4147f85bd9f5708416"}
Mar 08 03:38:51.176188 master-0 kubenswrapper[33141]: I0308 03:38:51.176182 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6fbfcd994f-49ft7"
Mar 08 03:38:51.177312 master-0 kubenswrapper[33141]: I0308 03:38:51.176198 33141 scope.go:117] "RemoveContainer" containerID="c02386a3e18dc137b5769051229bfe72e1b873c5d5f713a6682227fadacb819d"
Mar 08 03:38:51.212570 master-0 kubenswrapper[33141]: I0308 03:38:51.212446 33141 scope.go:117] "RemoveContainer" containerID="c02386a3e18dc137b5769051229bfe72e1b873c5d5f713a6682227fadacb819d"
Mar 08 03:38:51.213478 master-0 kubenswrapper[33141]: E0308 03:38:51.213015 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c02386a3e18dc137b5769051229bfe72e1b873c5d5f713a6682227fadacb819d\": container with ID starting with c02386a3e18dc137b5769051229bfe72e1b873c5d5f713a6682227fadacb819d not found: ID does not exist" containerID="c02386a3e18dc137b5769051229bfe72e1b873c5d5f713a6682227fadacb819d"
Mar 08 03:38:51.213478 master-0 kubenswrapper[33141]: I0308 03:38:51.213072 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c02386a3e18dc137b5769051229bfe72e1b873c5d5f713a6682227fadacb819d"} err="failed to get container status \"c02386a3e18dc137b5769051229bfe72e1b873c5d5f713a6682227fadacb819d\": rpc error: code = NotFound desc = could not find container \"c02386a3e18dc137b5769051229bfe72e1b873c5d5f713a6682227fadacb819d\": container with ID starting with c02386a3e18dc137b5769051229bfe72e1b873c5d5f713a6682227fadacb819d not found: ID does not exist"
Mar 08 03:38:51.234264 master-0 kubenswrapper[33141]: I0308 03:38:51.234132 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6fbfcd994f-49ft7"]
Mar 08 03:38:51.241617 master-0 kubenswrapper[33141]: I0308 03:38:51.240945 33141 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6fbfcd994f-49ft7"]
Mar 08 03:38:52.364921 master-0 kubenswrapper[33141]: I0308 03:38:52.364820 33141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3a1244d-2bc6-40c7-96c7-8e464a55ff4b" path="/var/lib/kubelet/pods/d3a1244d-2bc6-40c7-96c7-8e464a55ff4b/volumes"
Mar 08 03:39:02.428811 master-0 kubenswrapper[33141]: I0308 03:39:02.425441 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-75d8bd58cb-xqq9p"]
Mar 08 03:39:02.428811 master-0 kubenswrapper[33141]: E0308 03:39:02.425733 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a1244d-2bc6-40c7-96c7-8e464a55ff4b" containerName="console"
Mar 08 03:39:02.428811 master-0 kubenswrapper[33141]: I0308 03:39:02.425747 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a1244d-2bc6-40c7-96c7-8e464a55ff4b" containerName="console"
Mar 08 03:39:02.428811 master-0 kubenswrapper[33141]: I0308 03:39:02.425950 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a1244d-2bc6-40c7-96c7-8e464a55ff4b" containerName="console"
Mar 08 03:39:02.428811 master-0 kubenswrapper[33141]: I0308 03:39:02.426475 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:02.462990 master-0 kubenswrapper[33141]: I0308 03:39:02.460680 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-75d8bd58cb-xqq9p"]
Mar 08 03:39:02.520553 master-0 kubenswrapper[33141]: I0308 03:39:02.520491 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd8x8\" (UniqueName: \"kubernetes.io/projected/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-kube-api-access-nd8x8\") pod \"console-75d8bd58cb-xqq9p\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") " pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:02.520821 master-0 kubenswrapper[33141]: I0308 03:39:02.520564 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-service-ca\") pod \"console-75d8bd58cb-xqq9p\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") " pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:02.520821 master-0 kubenswrapper[33141]: I0308 03:39:02.520609 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-console-oauth-config\") pod \"console-75d8bd58cb-xqq9p\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") " pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:02.520821 master-0 kubenswrapper[33141]: I0308 03:39:02.520672 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-oauth-serving-cert\") pod \"console-75d8bd58cb-xqq9p\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") " pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:02.520821 master-0 kubenswrapper[33141]: I0308 03:39:02.520692 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-trusted-ca-bundle\") pod \"console-75d8bd58cb-xqq9p\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") " pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:02.520821 master-0 kubenswrapper[33141]: I0308 03:39:02.520731 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-console-serving-cert\") pod \"console-75d8bd58cb-xqq9p\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") " pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:02.520821 master-0 kubenswrapper[33141]: I0308 03:39:02.520755 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-console-config\") pod \"console-75d8bd58cb-xqq9p\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") " pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:02.622399 master-0 kubenswrapper[33141]: I0308 03:39:02.622318 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-oauth-serving-cert\") pod \"console-75d8bd58cb-xqq9p\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") " pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:02.622399 master-0 kubenswrapper[33141]: I0308 03:39:02.622371 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-trusted-ca-bundle\") pod \"console-75d8bd58cb-xqq9p\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") " pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:02.622795 master-0 kubenswrapper[33141]: I0308 03:39:02.622569 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-console-serving-cert\") pod \"console-75d8bd58cb-xqq9p\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") " pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:02.622795 master-0 kubenswrapper[33141]: I0308 03:39:02.622641 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-console-config\") pod \"console-75d8bd58cb-xqq9p\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") " pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:02.622795 master-0 kubenswrapper[33141]: I0308 03:39:02.622792 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd8x8\" (UniqueName: \"kubernetes.io/projected/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-kube-api-access-nd8x8\") pod \"console-75d8bd58cb-xqq9p\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") " pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:02.623032 master-0 kubenswrapper[33141]: I0308 03:39:02.622860 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-service-ca\") pod \"console-75d8bd58cb-xqq9p\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") " pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:02.623032 master-0 kubenswrapper[33141]: I0308 03:39:02.622953 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-console-oauth-config\") pod \"console-75d8bd58cb-xqq9p\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") " pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:02.624030 master-0 kubenswrapper[33141]: I0308 03:39:02.623250 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-oauth-serving-cert\") pod \"console-75d8bd58cb-xqq9p\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") " pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:02.624030 master-0 kubenswrapper[33141]: I0308 03:39:02.623933 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-trusted-ca-bundle\") pod \"console-75d8bd58cb-xqq9p\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") " pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:02.624401 master-0 kubenswrapper[33141]: I0308 03:39:02.624349 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-service-ca\") pod \"console-75d8bd58cb-xqq9p\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") " pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:02.624478 master-0 kubenswrapper[33141]: I0308 03:39:02.624387 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-console-config\") pod \"console-75d8bd58cb-xqq9p\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") " pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:02.627608 master-0 kubenswrapper[33141]: I0308 03:39:02.627561 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-console-oauth-config\") pod \"console-75d8bd58cb-xqq9p\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") " pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:02.627939 master-0 kubenswrapper[33141]: I0308 03:39:02.627878 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-console-serving-cert\") pod \"console-75d8bd58cb-xqq9p\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") " pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:02.647197 master-0 kubenswrapper[33141]: I0308 03:39:02.647150 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd8x8\" (UniqueName: \"kubernetes.io/projected/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-kube-api-access-nd8x8\") pod \"console-75d8bd58cb-xqq9p\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") " pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:02.747113 master-0 kubenswrapper[33141]: I0308 03:39:02.746941 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:03.333190 master-0 kubenswrapper[33141]: W0308 03:39:03.333059 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5e710d0_27ee_4931_b0d6_1fe5e7e8215d.slice/crio-88580c37013cf3070eb145942280799e16a04eefa15e2a4e1179b4659a67636d WatchSource:0}: Error finding container 88580c37013cf3070eb145942280799e16a04eefa15e2a4e1179b4659a67636d: Status 404 returned error can't find the container with id 88580c37013cf3070eb145942280799e16a04eefa15e2a4e1179b4659a67636d
Mar 08 03:39:03.337158 master-0 kubenswrapper[33141]: I0308 03:39:03.337091 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-75d8bd58cb-xqq9p"]
Mar 08 03:39:04.314594 master-0 kubenswrapper[33141]: I0308 03:39:04.314488 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75d8bd58cb-xqq9p" event={"ID":"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d","Type":"ContainerStarted","Data":"0780c1ed6798a631014fca857258a7eb9ea778da9cadfebf0db1635c85c06864"}
Mar 08 03:39:04.314594 master-0 kubenswrapper[33141]: I0308 03:39:04.314559 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75d8bd58cb-xqq9p" event={"ID":"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d","Type":"ContainerStarted","Data":"88580c37013cf3070eb145942280799e16a04eefa15e2a4e1179b4659a67636d"}
Mar 08 03:39:04.350169 master-0 kubenswrapper[33141]: I0308 03:39:04.350069 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-75d8bd58cb-xqq9p" podStartSLOduration=2.350047874 podStartE2EDuration="2.350047874s" podCreationTimestamp="2026-03-08 03:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:39:04.345945127 +0000 UTC m=+458.215838330" watchObservedRunningTime="2026-03-08 03:39:04.350047874 +0000 UTC m=+458.219941077"
Mar 08 03:39:12.747821 master-0 kubenswrapper[33141]: I0308 03:39:12.747688 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:12.747821 master-0 kubenswrapper[33141]: I0308 03:39:12.747797 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:12.757222 master-0 kubenswrapper[33141]: I0308 03:39:12.757126 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:13.402542 master-0 kubenswrapper[33141]: I0308 03:39:13.402466 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:39:13.480376 master-0 kubenswrapper[33141]: I0308 03:39:13.479931 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-79dfbb5ff-xk648"]
Mar 08 03:39:13.490548 master-0 kubenswrapper[33141]: I0308 03:39:13.490415 33141 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 08 03:39:13.491023 master-0 kubenswrapper[33141]: I0308 03:39:13.490855 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d80fb58c61b036bc2179d84399404132" containerName="cluster-policy-controller" containerID="cri-o://5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70" gracePeriod=30
Mar 08 03:39:13.491023 master-0 kubenswrapper[33141]: I0308 03:39:13.490951 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d80fb58c61b036bc2179d84399404132" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5" gracePeriod=30
Mar 08 03:39:13.491137 master-0 kubenswrapper[33141]: I0308 03:39:13.490992 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d80fb58c61b036bc2179d84399404132" containerName="kube-controller-manager" containerID="cri-o://332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab" gracePeriod=30
Mar 08 03:39:13.491255 master-0 kubenswrapper[33141]: I0308 03:39:13.490927 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d80fb58c61b036bc2179d84399404132" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07" gracePeriod=30
Mar 08 03:39:13.507438 master-0 kubenswrapper[33141]: I0308 03:39:13.495805 33141 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 08 03:39:13.507438 master-0 kubenswrapper[33141]: E0308 03:39:13.496167 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d80fb58c61b036bc2179d84399404132" containerName="kube-controller-manager"
Mar 08 03:39:13.507438 master-0 kubenswrapper[33141]: I0308 03:39:13.496184 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="d80fb58c61b036bc2179d84399404132" containerName="kube-controller-manager"
Mar 08 03:39:13.507438 master-0 kubenswrapper[33141]: E0308 03:39:13.496196 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d80fb58c61b036bc2179d84399404132" containerName="cluster-policy-controller"
Mar 08 03:39:13.507438 master-0 kubenswrapper[33141]: I0308 03:39:13.496205 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="d80fb58c61b036bc2179d84399404132" containerName="cluster-policy-controller"
Mar 08 03:39:13.507438 master-0 kubenswrapper[33141]: E0308 03:39:13.496237 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d80fb58c61b036bc2179d84399404132" containerName="kube-controller-manager-recovery-controller"
Mar 08 03:39:13.507438 master-0 kubenswrapper[33141]: I0308 03:39:13.496247 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="d80fb58c61b036bc2179d84399404132" containerName="kube-controller-manager-recovery-controller"
Mar 08 03:39:13.507438 master-0 kubenswrapper[33141]: E0308 03:39:13.496266 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d80fb58c61b036bc2179d84399404132" containerName="kube-controller-manager"
Mar 08 03:39:13.507438 master-0 kubenswrapper[33141]: I0308 03:39:13.496274 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="d80fb58c61b036bc2179d84399404132" containerName="kube-controller-manager"
Mar 08 03:39:13.507438 master-0 kubenswrapper[33141]: E0308 03:39:13.496292 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d80fb58c61b036bc2179d84399404132" containerName="kube-controller-manager-cert-syncer"
Mar 08 03:39:13.507438 master-0 kubenswrapper[33141]: I0308 03:39:13.496302 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="d80fb58c61b036bc2179d84399404132" containerName="kube-controller-manager-cert-syncer"
Mar 08 03:39:13.507438 master-0 kubenswrapper[33141]: E0308 03:39:13.496315 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d80fb58c61b036bc2179d84399404132" containerName="kube-controller-manager"
Mar 08 03:39:13.507438 master-0 kubenswrapper[33141]: I0308 03:39:13.496323 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="d80fb58c61b036bc2179d84399404132" containerName="kube-controller-manager"
Mar 08 03:39:13.507438 master-0 kubenswrapper[33141]: I0308 03:39:13.496489 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="d80fb58c61b036bc2179d84399404132" containerName="kube-controller-manager"
Mar 08 03:39:13.507438 master-0 kubenswrapper[33141]: I0308 03:39:13.496510 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="d80fb58c61b036bc2179d84399404132" containerName="kube-controller-manager"
Mar 08 03:39:13.507438 master-0 kubenswrapper[33141]: I0308 03:39:13.496523 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="d80fb58c61b036bc2179d84399404132" containerName="kube-controller-manager"
Mar 08 03:39:13.507438 master-0 kubenswrapper[33141]: I0308 03:39:13.496545 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="d80fb58c61b036bc2179d84399404132" containerName="kube-controller-manager-cert-syncer"
Mar 08 03:39:13.507438 master-0 kubenswrapper[33141]: I0308 03:39:13.496562 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="d80fb58c61b036bc2179d84399404132" containerName="cluster-policy-controller"
Mar 08 03:39:13.507438 master-0 kubenswrapper[33141]: I0308 03:39:13.496575 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="d80fb58c61b036bc2179d84399404132" containerName="kube-controller-manager-recovery-controller"
Mar 08 03:39:13.661879 master-0 kubenswrapper[33141]: I0308 03:39:13.661770 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/021a99d52e4f3f6d8ed4d016669c0eb8-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"021a99d52e4f3f6d8ed4d016669c0eb8\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:39:13.662249 master-0 kubenswrapper[33141]: I0308 03:39:13.662200 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/021a99d52e4f3f6d8ed4d016669c0eb8-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"021a99d52e4f3f6d8ed4d016669c0eb8\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:39:13.764021 master-0 kubenswrapper[33141]: I0308 03:39:13.763869 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/021a99d52e4f3f6d8ed4d016669c0eb8-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"021a99d52e4f3f6d8ed4d016669c0eb8\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:39:13.765180 master-0 kubenswrapper[33141]: I0308 03:39:13.764081 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/021a99d52e4f3f6d8ed4d016669c0eb8-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"021a99d52e4f3f6d8ed4d016669c0eb8\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:39:13.765180 master-0 kubenswrapper[33141]: I0308 03:39:13.764174 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/021a99d52e4f3f6d8ed4d016669c0eb8-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"021a99d52e4f3f6d8ed4d016669c0eb8\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:39:13.765180 master-0 kubenswrapper[33141]: I0308 03:39:13.764236 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/021a99d52e4f3f6d8ed4d016669c0eb8-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"021a99d52e4f3f6d8ed4d016669c0eb8\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:39:13.774309 master-0 kubenswrapper[33141]: I0308 03:39:13.774196 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_d80fb58c61b036bc2179d84399404132/kube-controller-manager/1.log"
Mar 08 03:39:13.776475 master-0 kubenswrapper[33141]: I0308 03:39:13.776395 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_d80fb58c61b036bc2179d84399404132/kube-controller-manager-cert-syncer/0.log"
Mar 08 03:39:13.777437 master-0 kubenswrapper[33141]: I0308 03:39:13.777383 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:39:13.782609 master-0 kubenswrapper[33141]: I0308 03:39:13.782475 33141 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="d80fb58c61b036bc2179d84399404132" podUID="021a99d52e4f3f6d8ed4d016669c0eb8"
Mar 08 03:39:13.968456 master-0 kubenswrapper[33141]: I0308 03:39:13.966878 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d80fb58c61b036bc2179d84399404132-cert-dir\") pod \"d80fb58c61b036bc2179d84399404132\" (UID: \"d80fb58c61b036bc2179d84399404132\") "
Mar 08 03:39:13.968456 master-0 kubenswrapper[33141]: I0308 03:39:13.967257 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d80fb58c61b036bc2179d84399404132-resource-dir\") pod \"d80fb58c61b036bc2179d84399404132\" (UID: \"d80fb58c61b036bc2179d84399404132\") "
Mar 08 03:39:13.968456 master-0 kubenswrapper[33141]: I0308 03:39:13.968148 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d80fb58c61b036bc2179d84399404132-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "d80fb58c61b036bc2179d84399404132" (UID: "d80fb58c61b036bc2179d84399404132"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:39:13.968456 master-0 kubenswrapper[33141]: I0308 03:39:13.968170 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d80fb58c61b036bc2179d84399404132-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "d80fb58c61b036bc2179d84399404132" (UID: "d80fb58c61b036bc2179d84399404132"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 03:39:14.069304 master-0 kubenswrapper[33141]: I0308 03:39:14.069231 33141 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d80fb58c61b036bc2179d84399404132-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:39:14.069304 master-0 kubenswrapper[33141]: I0308 03:39:14.069283 33141 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d80fb58c61b036bc2179d84399404132-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 08 03:39:14.364755 master-0 kubenswrapper[33141]: I0308 03:39:14.364681 33141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d80fb58c61b036bc2179d84399404132" path="/var/lib/kubelet/pods/d80fb58c61b036bc2179d84399404132/volumes"
Mar 08 03:39:14.410252 master-0 kubenswrapper[33141]: I0308 03:39:14.410204 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_d80fb58c61b036bc2179d84399404132/kube-controller-manager/1.log"
Mar 08 03:39:14.411307 master-0 kubenswrapper[33141]: I0308 03:39:14.411266 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_d80fb58c61b036bc2179d84399404132/kube-controller-manager-cert-syncer/0.log"
Mar 08 03:39:14.411885 master-0 kubenswrapper[33141]: I0308 03:39:14.411848 33141 generic.go:334] "Generic (PLEG): container finished" podID="d80fb58c61b036bc2179d84399404132" containerID="332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab" exitCode=0
Mar 08 03:39:14.411948 master-0 kubenswrapper[33141]: I0308 03:39:14.411888 33141 generic.go:334] "Generic (PLEG): container finished" podID="d80fb58c61b036bc2179d84399404132" containerID="8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07" exitCode=0
Mar 08 03:39:14.411948 master-0 kubenswrapper[33141]: I0308 03:39:14.411929 33141 generic.go:334] "Generic (PLEG): container finished" podID="d80fb58c61b036bc2179d84399404132" containerID="b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5" exitCode=2
Mar 08 03:39:14.411948 master-0 kubenswrapper[33141]: I0308 03:39:14.411945 33141 generic.go:334] "Generic (PLEG): container finished" podID="d80fb58c61b036bc2179d84399404132" containerID="5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70" exitCode=0
Mar 08 03:39:14.412045 master-0 kubenswrapper[33141]: I0308 03:39:14.411989 33141 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:39:14.412105 master-0 kubenswrapper[33141]: I0308 03:39:14.411990 33141 scope.go:117] "RemoveContainer" containerID="332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab" Mar 08 03:39:14.413825 master-0 kubenswrapper[33141]: I0308 03:39:14.413773 33141 generic.go:334] "Generic (PLEG): container finished" podID="4789137f-dcfe-4afa-9f1e-91546be2c979" containerID="13028b3dc6e0a9b6aac71e55250763d0b3ec7504976e2ced0eb6d5b166a8a90f" exitCode=0 Mar 08 03:39:14.413894 master-0 kubenswrapper[33141]: I0308 03:39:14.413828 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"4789137f-dcfe-4afa-9f1e-91546be2c979","Type":"ContainerDied","Data":"13028b3dc6e0a9b6aac71e55250763d0b3ec7504976e2ced0eb6d5b166a8a90f"} Mar 08 03:39:14.417636 master-0 kubenswrapper[33141]: I0308 03:39:14.417585 33141 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="d80fb58c61b036bc2179d84399404132" podUID="021a99d52e4f3f6d8ed4d016669c0eb8" Mar 08 03:39:14.437293 master-0 kubenswrapper[33141]: I0308 03:39:14.437213 33141 scope.go:117] "RemoveContainer" containerID="7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9" Mar 08 03:39:14.447026 master-0 kubenswrapper[33141]: I0308 03:39:14.446952 33141 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="d80fb58c61b036bc2179d84399404132" podUID="021a99d52e4f3f6d8ed4d016669c0eb8" Mar 08 03:39:14.470244 master-0 kubenswrapper[33141]: I0308 03:39:14.470182 33141 scope.go:117] "RemoveContainer" containerID="8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07" Mar 08 03:39:14.489154 master-0 kubenswrapper[33141]: 
I0308 03:39:14.489100 33141 scope.go:117] "RemoveContainer" containerID="b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5" Mar 08 03:39:14.505517 master-0 kubenswrapper[33141]: I0308 03:39:14.505273 33141 scope.go:117] "RemoveContainer" containerID="5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70" Mar 08 03:39:14.519499 master-0 kubenswrapper[33141]: I0308 03:39:14.519461 33141 scope.go:117] "RemoveContainer" containerID="332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab" Mar 08 03:39:14.519831 master-0 kubenswrapper[33141]: E0308 03:39:14.519804 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab\": container with ID starting with 332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab not found: ID does not exist" containerID="332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab" Mar 08 03:39:14.519925 master-0 kubenswrapper[33141]: I0308 03:39:14.519842 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab"} err="failed to get container status \"332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab\": rpc error: code = NotFound desc = could not find container \"332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab\": container with ID starting with 332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab not found: ID does not exist" Mar 08 03:39:14.519925 master-0 kubenswrapper[33141]: I0308 03:39:14.519888 33141 scope.go:117] "RemoveContainer" containerID="7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9" Mar 08 03:39:14.520274 master-0 kubenswrapper[33141]: E0308 03:39:14.520217 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9\": container with ID starting with 7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9 not found: ID does not exist" containerID="7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9" Mar 08 03:39:14.520339 master-0 kubenswrapper[33141]: I0308 03:39:14.520294 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9"} err="failed to get container status \"7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9\": rpc error: code = NotFound desc = could not find container \"7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9\": container with ID starting with 7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9 not found: ID does not exist" Mar 08 03:39:14.520403 master-0 kubenswrapper[33141]: I0308 03:39:14.520337 33141 scope.go:117] "RemoveContainer" containerID="8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07" Mar 08 03:39:14.520923 master-0 kubenswrapper[33141]: E0308 03:39:14.520870 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07\": container with ID starting with 8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07 not found: ID does not exist" containerID="8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07" Mar 08 03:39:14.521004 master-0 kubenswrapper[33141]: I0308 03:39:14.520954 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07"} err="failed to get container status \"8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07\": rpc error: code = NotFound desc = could not find container 
\"8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07\": container with ID starting with 8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07 not found: ID does not exist" Mar 08 03:39:14.521004 master-0 kubenswrapper[33141]: I0308 03:39:14.520983 33141 scope.go:117] "RemoveContainer" containerID="b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5" Mar 08 03:39:14.521215 master-0 kubenswrapper[33141]: E0308 03:39:14.521189 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5\": container with ID starting with b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5 not found: ID does not exist" containerID="b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5" Mar 08 03:39:14.521289 master-0 kubenswrapper[33141]: I0308 03:39:14.521237 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5"} err="failed to get container status \"b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5\": rpc error: code = NotFound desc = could not find container \"b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5\": container with ID starting with b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5 not found: ID does not exist" Mar 08 03:39:14.521289 master-0 kubenswrapper[33141]: I0308 03:39:14.521255 33141 scope.go:117] "RemoveContainer" containerID="5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70" Mar 08 03:39:14.521648 master-0 kubenswrapper[33141]: E0308 03:39:14.521590 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70\": container with ID starting with 
5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70 not found: ID does not exist" containerID="5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70" Mar 08 03:39:14.521755 master-0 kubenswrapper[33141]: I0308 03:39:14.521672 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70"} err="failed to get container status \"5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70\": rpc error: code = NotFound desc = could not find container \"5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70\": container with ID starting with 5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70 not found: ID does not exist" Mar 08 03:39:14.521755 master-0 kubenswrapper[33141]: I0308 03:39:14.521715 33141 scope.go:117] "RemoveContainer" containerID="332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab" Mar 08 03:39:14.522047 master-0 kubenswrapper[33141]: I0308 03:39:14.522010 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab"} err="failed to get container status \"332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab\": rpc error: code = NotFound desc = could not find container \"332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab\": container with ID starting with 332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab not found: ID does not exist" Mar 08 03:39:14.522047 master-0 kubenswrapper[33141]: I0308 03:39:14.522030 33141 scope.go:117] "RemoveContainer" containerID="7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9" Mar 08 03:39:14.522376 master-0 kubenswrapper[33141]: I0308 03:39:14.522334 33141 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9"} err="failed to get container status \"7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9\": rpc error: code = NotFound desc = could not find container \"7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9\": container with ID starting with 7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9 not found: ID does not exist" Mar 08 03:39:14.522456 master-0 kubenswrapper[33141]: I0308 03:39:14.522378 33141 scope.go:117] "RemoveContainer" containerID="8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07" Mar 08 03:39:14.522832 master-0 kubenswrapper[33141]: I0308 03:39:14.522628 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07"} err="failed to get container status \"8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07\": rpc error: code = NotFound desc = could not find container \"8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07\": container with ID starting with 8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07 not found: ID does not exist" Mar 08 03:39:14.522832 master-0 kubenswrapper[33141]: I0308 03:39:14.522648 33141 scope.go:117] "RemoveContainer" containerID="b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5" Mar 08 03:39:14.523224 master-0 kubenswrapper[33141]: I0308 03:39:14.522883 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5"} err="failed to get container status \"b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5\": rpc error: code = NotFound desc = could not find container \"b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5\": container with ID starting with 
b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5 not found: ID does not exist" Mar 08 03:39:14.523224 master-0 kubenswrapper[33141]: I0308 03:39:14.522938 33141 scope.go:117] "RemoveContainer" containerID="5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70" Mar 08 03:39:14.523224 master-0 kubenswrapper[33141]: I0308 03:39:14.523211 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70"} err="failed to get container status \"5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70\": rpc error: code = NotFound desc = could not find container \"5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70\": container with ID starting with 5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70 not found: ID does not exist" Mar 08 03:39:14.523224 master-0 kubenswrapper[33141]: I0308 03:39:14.523224 33141 scope.go:117] "RemoveContainer" containerID="332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab" Mar 08 03:39:14.523594 master-0 kubenswrapper[33141]: I0308 03:39:14.523510 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab"} err="failed to get container status \"332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab\": rpc error: code = NotFound desc = could not find container \"332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab\": container with ID starting with 332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab not found: ID does not exist" Mar 08 03:39:14.523670 master-0 kubenswrapper[33141]: I0308 03:39:14.523591 33141 scope.go:117] "RemoveContainer" containerID="7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9" Mar 08 03:39:14.523987 master-0 kubenswrapper[33141]: I0308 03:39:14.523964 33141 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9"} err="failed to get container status \"7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9\": rpc error: code = NotFound desc = could not find container \"7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9\": container with ID starting with 7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9 not found: ID does not exist" Mar 08 03:39:14.523987 master-0 kubenswrapper[33141]: I0308 03:39:14.523983 33141 scope.go:117] "RemoveContainer" containerID="8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07" Mar 08 03:39:14.524394 master-0 kubenswrapper[33141]: I0308 03:39:14.524366 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07"} err="failed to get container status \"8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07\": rpc error: code = NotFound desc = could not find container \"8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07\": container with ID starting with 8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07 not found: ID does not exist" Mar 08 03:39:14.524605 master-0 kubenswrapper[33141]: I0308 03:39:14.524388 33141 scope.go:117] "RemoveContainer" containerID="b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5" Mar 08 03:39:14.524737 master-0 kubenswrapper[33141]: I0308 03:39:14.524694 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5"} err="failed to get container status \"b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5\": rpc error: code = NotFound desc = could not find container \"b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5\": container with ID starting 
with b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5 not found: ID does not exist" Mar 08 03:39:14.524737 master-0 kubenswrapper[33141]: I0308 03:39:14.524740 33141 scope.go:117] "RemoveContainer" containerID="5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70" Mar 08 03:39:14.525043 master-0 kubenswrapper[33141]: I0308 03:39:14.525020 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70"} err="failed to get container status \"5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70\": rpc error: code = NotFound desc = could not find container \"5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70\": container with ID starting with 5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70 not found: ID does not exist" Mar 08 03:39:14.525043 master-0 kubenswrapper[33141]: I0308 03:39:14.525039 33141 scope.go:117] "RemoveContainer" containerID="332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab" Mar 08 03:39:14.525343 master-0 kubenswrapper[33141]: I0308 03:39:14.525314 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab"} err="failed to get container status \"332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab\": rpc error: code = NotFound desc = could not find container \"332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab\": container with ID starting with 332306d7dbf184d29436906eb10b5bb337b53506d14db47b82dfda5a230a98ab not found: ID does not exist" Mar 08 03:39:14.525343 master-0 kubenswrapper[33141]: I0308 03:39:14.525332 33141 scope.go:117] "RemoveContainer" containerID="7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9" Mar 08 03:39:14.525739 master-0 kubenswrapper[33141]: I0308 03:39:14.525526 33141 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9"} err="failed to get container status \"7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9\": rpc error: code = NotFound desc = could not find container \"7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9\": container with ID starting with 7b99154d23492ac2827daa714704ed864315e1a09f912ae7cbddd060b73a58b9 not found: ID does not exist" Mar 08 03:39:14.525739 master-0 kubenswrapper[33141]: I0308 03:39:14.525733 33141 scope.go:117] "RemoveContainer" containerID="8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07" Mar 08 03:39:14.525990 master-0 kubenswrapper[33141]: I0308 03:39:14.525965 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07"} err="failed to get container status \"8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07\": rpc error: code = NotFound desc = could not find container \"8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07\": container with ID starting with 8b4b4a41c43b420ee4ea31ca92d2281f0fa33c8432218cb3a27c79ded9a4be07 not found: ID does not exist" Mar 08 03:39:14.525990 master-0 kubenswrapper[33141]: I0308 03:39:14.525983 33141 scope.go:117] "RemoveContainer" containerID="b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5" Mar 08 03:39:14.526322 master-0 kubenswrapper[33141]: I0308 03:39:14.526265 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5"} err="failed to get container status \"b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5\": rpc error: code = NotFound desc = could not find container \"b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5\": 
container with ID starting with b1cdd2847bce05ce606464fc2197040b8c7c46b24a6a67a1f9ecd7a102c587b5 not found: ID does not exist" Mar 08 03:39:14.526567 master-0 kubenswrapper[33141]: I0308 03:39:14.526328 33141 scope.go:117] "RemoveContainer" containerID="5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70" Mar 08 03:39:14.526954 master-0 kubenswrapper[33141]: I0308 03:39:14.526929 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70"} err="failed to get container status \"5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70\": rpc error: code = NotFound desc = could not find container \"5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70\": container with ID starting with 5c74eced7c4b444acd9ae51002ad2d9219a7b997d872522d969a5f5b2edbab70 not found: ID does not exist" Mar 08 03:39:15.801134 master-0 kubenswrapper[33141]: I0308 03:39:15.801083 33141 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 03:39:16.001085 master-0 kubenswrapper[33141]: I0308 03:39:16.001034 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4789137f-dcfe-4afa-9f1e-91546be2c979-kube-api-access\") pod \"4789137f-dcfe-4afa-9f1e-91546be2c979\" (UID: \"4789137f-dcfe-4afa-9f1e-91546be2c979\") " Mar 08 03:39:16.001291 master-0 kubenswrapper[33141]: I0308 03:39:16.001133 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4789137f-dcfe-4afa-9f1e-91546be2c979-kubelet-dir\") pod \"4789137f-dcfe-4afa-9f1e-91546be2c979\" (UID: \"4789137f-dcfe-4afa-9f1e-91546be2c979\") " Mar 08 03:39:16.001291 master-0 kubenswrapper[33141]: I0308 03:39:16.001272 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4789137f-dcfe-4afa-9f1e-91546be2c979-var-lock\") pod \"4789137f-dcfe-4afa-9f1e-91546be2c979\" (UID: \"4789137f-dcfe-4afa-9f1e-91546be2c979\") " Mar 08 03:39:16.001417 master-0 kubenswrapper[33141]: I0308 03:39:16.001395 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4789137f-dcfe-4afa-9f1e-91546be2c979-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4789137f-dcfe-4afa-9f1e-91546be2c979" (UID: "4789137f-dcfe-4afa-9f1e-91546be2c979"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:39:16.001519 master-0 kubenswrapper[33141]: I0308 03:39:16.001496 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4789137f-dcfe-4afa-9f1e-91546be2c979-var-lock" (OuterVolumeSpecName: "var-lock") pod "4789137f-dcfe-4afa-9f1e-91546be2c979" (UID: "4789137f-dcfe-4afa-9f1e-91546be2c979"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 03:39:16.001681 master-0 kubenswrapper[33141]: I0308 03:39:16.001665 33141 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4789137f-dcfe-4afa-9f1e-91546be2c979-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 08 03:39:16.001755 master-0 kubenswrapper[33141]: I0308 03:39:16.001742 33141 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4789137f-dcfe-4afa-9f1e-91546be2c979-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 08 03:39:16.004864 master-0 kubenswrapper[33141]: I0308 03:39:16.004797 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4789137f-dcfe-4afa-9f1e-91546be2c979-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4789137f-dcfe-4afa-9f1e-91546be2c979" (UID: "4789137f-dcfe-4afa-9f1e-91546be2c979"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 03:39:16.103067 master-0 kubenswrapper[33141]: I0308 03:39:16.102999 33141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4789137f-dcfe-4afa-9f1e-91546be2c979-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 08 03:39:16.454018 master-0 kubenswrapper[33141]: I0308 03:39:16.453948 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"4789137f-dcfe-4afa-9f1e-91546be2c979","Type":"ContainerDied","Data":"20eac49adc3fdfba262c6d581be3c93425c587ca9c06252c7121a77933a0d776"} Mar 08 03:39:16.454018 master-0 kubenswrapper[33141]: I0308 03:39:16.454000 33141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20eac49adc3fdfba262c6d581be3c93425c587ca9c06252c7121a77933a0d776" Mar 08 03:39:16.454491 master-0 kubenswrapper[33141]: I0308 03:39:16.454450 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 08 03:39:28.350214 master-0 kubenswrapper[33141]: I0308 03:39:28.350102 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:39:28.378510 master-0 kubenswrapper[33141]: I0308 03:39:28.378445 33141 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="89ed0b51-b137-4c31-b33c-94fe2f4cd1b8" Mar 08 03:39:28.378510 master-0 kubenswrapper[33141]: I0308 03:39:28.378504 33141 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="89ed0b51-b137-4c31-b33c-94fe2f4cd1b8" Mar 08 03:39:28.430255 master-0 kubenswrapper[33141]: I0308 03:39:28.429354 33141 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 08 03:39:28.451425 master-0 kubenswrapper[33141]: I0308 03:39:28.451354 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 08 03:39:28.460979 master-0 kubenswrapper[33141]: I0308 03:39:28.460188 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:39:28.467938 master-0 kubenswrapper[33141]: I0308 03:39:28.466741 33141 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 08 03:39:28.473188 master-0 kubenswrapper[33141]: I0308 03:39:28.472821 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 08 03:39:28.505147 master-0 kubenswrapper[33141]: W0308 03:39:28.505089 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod021a99d52e4f3f6d8ed4d016669c0eb8.slice/crio-7ff4ffdf29bf3f805e93c2ca2cb377fc898789039300bf7e1267490dc5972a74 WatchSource:0}: Error finding container 7ff4ffdf29bf3f805e93c2ca2cb377fc898789039300bf7e1267490dc5972a74: Status 404 returned error can't find the container with id 7ff4ffdf29bf3f805e93c2ca2cb377fc898789039300bf7e1267490dc5972a74
Mar 08 03:39:28.591300 master-0 kubenswrapper[33141]: I0308 03:39:28.591232 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"021a99d52e4f3f6d8ed4d016669c0eb8","Type":"ContainerStarted","Data":"7ff4ffdf29bf3f805e93c2ca2cb377fc898789039300bf7e1267490dc5972a74"}
Mar 08 03:39:29.606163 master-0 kubenswrapper[33141]: I0308 03:39:29.606095 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"021a99d52e4f3f6d8ed4d016669c0eb8","Type":"ContainerStarted","Data":"37c42dca5bfb29affaa9eb20bb666d94485e0c1bf13e7aa2ec14c1e448a7b2ca"}
Mar 08 03:39:29.607101 master-0 kubenswrapper[33141]: I0308 03:39:29.607028 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"021a99d52e4f3f6d8ed4d016669c0eb8","Type":"ContainerStarted","Data":"31f02206fcec8a2ddd5d9c0af45ae725f7e3bbd8c58bf8eb3ef4a542984a7477"}
Mar 08 03:39:29.607101 master-0 kubenswrapper[33141]: I0308 03:39:29.607059 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"021a99d52e4f3f6d8ed4d016669c0eb8","Type":"ContainerStarted","Data":"9942ba32b26eee9487315507fd6488705aab585fbf8e854bd5327686b5e0f08b"}
Mar 08 03:39:29.607101 master-0 kubenswrapper[33141]: I0308 03:39:29.607075 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"021a99d52e4f3f6d8ed4d016669c0eb8","Type":"ContainerStarted","Data":"58c57cc5ace1aef95838dc7d81b78af09f5f5c25f023fa2fee5ba1cd7cd6fa40"}
Mar 08 03:39:29.642952 master-0 kubenswrapper[33141]: I0308 03:39:29.642835 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=1.642812691 podStartE2EDuration="1.642812691s" podCreationTimestamp="2026-03-08 03:39:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:39:29.634238667 +0000 UTC m=+483.504131930" watchObservedRunningTime="2026-03-08 03:39:29.642812691 +0000 UTC m=+483.512705884"
Mar 08 03:39:38.461633 master-0 kubenswrapper[33141]: I0308 03:39:38.461555 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:39:38.461633 master-0 kubenswrapper[33141]: I0308 03:39:38.461629 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:39:38.461633 master-0 kubenswrapper[33141]: I0308 03:39:38.461652 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:39:38.463021 master-0 kubenswrapper[33141]: I0308 03:39:38.462957 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:39:38.471658 master-0 kubenswrapper[33141]: I0308 03:39:38.471610 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:39:38.472281 master-0 kubenswrapper[33141]: I0308 03:39:38.472217 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:39:38.552119 master-0 kubenswrapper[33141]: I0308 03:39:38.551968 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-79dfbb5ff-xk648" podUID="6a4b6519-6725-4fc3-bb3b-f5e6e13a6592" containerName="console" containerID="cri-o://26ed18f456a0d83cb1b9c08e66787611a5b5be658aab613da6c7f0b5d2083b8d" gracePeriod=15
Mar 08 03:39:38.689891 master-0 kubenswrapper[33141]: I0308 03:39:38.689837 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-79dfbb5ff-xk648_6a4b6519-6725-4fc3-bb3b-f5e6e13a6592/console/0.log"
Mar 08 03:39:38.690089 master-0 kubenswrapper[33141]: I0308 03:39:38.689917 33141 generic.go:334] "Generic (PLEG): container finished" podID="6a4b6519-6725-4fc3-bb3b-f5e6e13a6592" containerID="26ed18f456a0d83cb1b9c08e66787611a5b5be658aab613da6c7f0b5d2083b8d" exitCode=2
Mar 08 03:39:38.690145 master-0 kubenswrapper[33141]: I0308 03:39:38.690072 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79dfbb5ff-xk648" event={"ID":"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592","Type":"ContainerDied","Data":"26ed18f456a0d83cb1b9c08e66787611a5b5be658aab613da6c7f0b5d2083b8d"}
Mar 08 03:39:38.695302 master-0 kubenswrapper[33141]: I0308 03:39:38.695253 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:39:38.695713 master-0 kubenswrapper[33141]: I0308 03:39:38.695658 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 08 03:39:39.031234 master-0 kubenswrapper[33141]: I0308 03:39:39.031120 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-79dfbb5ff-xk648_6a4b6519-6725-4fc3-bb3b-f5e6e13a6592/console/0.log"
Mar 08 03:39:39.031234 master-0 kubenswrapper[33141]: I0308 03:39:39.031208 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-79dfbb5ff-xk648"
Mar 08 03:39:39.186550 master-0 kubenswrapper[33141]: I0308 03:39:39.186462 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-console-config\") pod \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") "
Mar 08 03:39:39.186824 master-0 kubenswrapper[33141]: I0308 03:39:39.186564 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-console-serving-cert\") pod \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") "
Mar 08 03:39:39.186824 master-0 kubenswrapper[33141]: I0308 03:39:39.186670 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-console-oauth-config\") pod \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") "
Mar 08 03:39:39.186824 master-0 kubenswrapper[33141]: I0308 03:39:39.186749 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-oauth-serving-cert\") pod \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") "
Mar 08 03:39:39.186824 master-0 kubenswrapper[33141]: I0308 03:39:39.186804 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-trusted-ca-bundle\") pod \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") "
Mar 08 03:39:39.187159 master-0 kubenswrapper[33141]: I0308 03:39:39.186881 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfn7t\" (UniqueName: \"kubernetes.io/projected/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-kube-api-access-vfn7t\") pod \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") "
Mar 08 03:39:39.187159 master-0 kubenswrapper[33141]: I0308 03:39:39.186948 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-service-ca\") pod \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\" (UID: \"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592\") "
Mar 08 03:39:39.187512 master-0 kubenswrapper[33141]: I0308 03:39:39.187460 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-console-config" (OuterVolumeSpecName: "console-config") pod "6a4b6519-6725-4fc3-bb3b-f5e6e13a6592" (UID: "6a4b6519-6725-4fc3-bb3b-f5e6e13a6592"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 03:39:39.188372 master-0 kubenswrapper[33141]: I0308 03:39:39.188286 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a4b6519-6725-4fc3-bb3b-f5e6e13a6592" (UID: "6a4b6519-6725-4fc3-bb3b-f5e6e13a6592"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 03:39:39.188372 master-0 kubenswrapper[33141]: I0308 03:39:39.188120 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a4b6519-6725-4fc3-bb3b-f5e6e13a6592" (UID: "6a4b6519-6725-4fc3-bb3b-f5e6e13a6592"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 03:39:39.188579 master-0 kubenswrapper[33141]: I0308 03:39:39.188369 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a4b6519-6725-4fc3-bb3b-f5e6e13a6592" (UID: "6a4b6519-6725-4fc3-bb3b-f5e6e13a6592"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 03:39:39.190675 master-0 kubenswrapper[33141]: I0308 03:39:39.190620 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a4b6519-6725-4fc3-bb3b-f5e6e13a6592" (UID: "6a4b6519-6725-4fc3-bb3b-f5e6e13a6592"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 03:39:39.190797 master-0 kubenswrapper[33141]: I0308 03:39:39.190659 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a4b6519-6725-4fc3-bb3b-f5e6e13a6592" (UID: "6a4b6519-6725-4fc3-bb3b-f5e6e13a6592"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 03:39:39.191545 master-0 kubenswrapper[33141]: I0308 03:39:39.191460 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-kube-api-access-vfn7t" (OuterVolumeSpecName: "kube-api-access-vfn7t") pod "6a4b6519-6725-4fc3-bb3b-f5e6e13a6592" (UID: "6a4b6519-6725-4fc3-bb3b-f5e6e13a6592"). InnerVolumeSpecName "kube-api-access-vfn7t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:39:39.289627 master-0 kubenswrapper[33141]: I0308 03:39:39.289474 33141 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 08 03:39:39.289627 master-0 kubenswrapper[33141]: I0308 03:39:39.289531 33141 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 08 03:39:39.289627 master-0 kubenswrapper[33141]: I0308 03:39:39.289550 33141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfn7t\" (UniqueName: \"kubernetes.io/projected/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-kube-api-access-vfn7t\") on node \"master-0\" DevicePath \"\""
Mar 08 03:39:39.289627 master-0 kubenswrapper[33141]: I0308 03:39:39.289566 33141 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 08 03:39:39.289627 master-0 kubenswrapper[33141]: I0308 03:39:39.289582 33141 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-console-config\") on node \"master-0\" DevicePath \"\""
Mar 08 03:39:39.289627 master-0 kubenswrapper[33141]: I0308 03:39:39.289594 33141 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 08 03:39:39.289627 master-0 kubenswrapper[33141]: I0308 03:39:39.289606 33141 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 08 03:39:39.701958 master-0 kubenswrapper[33141]: I0308 03:39:39.701588 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-79dfbb5ff-xk648_6a4b6519-6725-4fc3-bb3b-f5e6e13a6592/console/0.log"
Mar 08 03:39:39.701958 master-0 kubenswrapper[33141]: I0308 03:39:39.701724 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79dfbb5ff-xk648" event={"ID":"6a4b6519-6725-4fc3-bb3b-f5e6e13a6592","Type":"ContainerDied","Data":"dd5910d651e29cc9761f45dcbcef8b34e2ab51d60d8e4620e3a32e9f78ab8459"}
Mar 08 03:39:39.701958 master-0 kubenswrapper[33141]: I0308 03:39:39.701754 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-79dfbb5ff-xk648"
Mar 08 03:39:39.701958 master-0 kubenswrapper[33141]: I0308 03:39:39.701779 33141 scope.go:117] "RemoveContainer" containerID="26ed18f456a0d83cb1b9c08e66787611a5b5be658aab613da6c7f0b5d2083b8d"
Mar 08 03:39:39.760540 master-0 kubenswrapper[33141]: I0308 03:39:39.760435 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-79dfbb5ff-xk648"]
Mar 08 03:39:39.773081 master-0 kubenswrapper[33141]: I0308 03:39:39.773000 33141 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-79dfbb5ff-xk648"]
Mar 08 03:39:40.361527 master-0 kubenswrapper[33141]: I0308 03:39:40.361444 33141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a4b6519-6725-4fc3-bb3b-f5e6e13a6592" path="/var/lib/kubelet/pods/6a4b6519-6725-4fc3-bb3b-f5e6e13a6592/volumes"
Mar 08 03:39:51.922475 master-0 kubenswrapper[33141]: I0308 03:39:51.922265 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv"]
Mar 08 03:39:51.924036 master-0 kubenswrapper[33141]: E0308 03:39:51.922850 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4789137f-dcfe-4afa-9f1e-91546be2c979" containerName="installer"
Mar 08 03:39:51.924036 master-0 kubenswrapper[33141]: I0308 03:39:51.922887 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="4789137f-dcfe-4afa-9f1e-91546be2c979" containerName="installer"
Mar 08 03:39:51.924036 master-0 kubenswrapper[33141]: E0308 03:39:51.922965 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a4b6519-6725-4fc3-bb3b-f5e6e13a6592" containerName="console"
Mar 08 03:39:51.924036 master-0 kubenswrapper[33141]: I0308 03:39:51.922987 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a4b6519-6725-4fc3-bb3b-f5e6e13a6592" containerName="console"
Mar 08 03:39:51.924036 master-0 kubenswrapper[33141]: I0308 03:39:51.923337 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="4789137f-dcfe-4afa-9f1e-91546be2c979" containerName="installer"
Mar 08 03:39:51.924036 master-0 kubenswrapper[33141]: I0308 03:39:51.923386 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a4b6519-6725-4fc3-bb3b-f5e6e13a6592" containerName="console"
Mar 08 03:39:51.925819 master-0 kubenswrapper[33141]: I0308 03:39:51.925744 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv"
Mar 08 03:39:51.928280 master-0 kubenswrapper[33141]: I0308 03:39:51.928230 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-gxv5n"
Mar 08 03:39:51.932596 master-0 kubenswrapper[33141]: I0308 03:39:51.932509 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv"]
Mar 08 03:39:52.009611 master-0 kubenswrapper[33141]: I0308 03:39:52.009512 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5b0695c-0239-4027-9d8d-e447e733a424-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv\" (UID: \"a5b0695c-0239-4027-9d8d-e447e733a424\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv"
Mar 08 03:39:52.009611 master-0 kubenswrapper[33141]: I0308 03:39:52.009589 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txd85\" (UniqueName: \"kubernetes.io/projected/a5b0695c-0239-4027-9d8d-e447e733a424-kube-api-access-txd85\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv\" (UID: \"a5b0695c-0239-4027-9d8d-e447e733a424\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv"
Mar 08 03:39:52.009611 master-0 kubenswrapper[33141]: I0308 03:39:52.009656 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5b0695c-0239-4027-9d8d-e447e733a424-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv\" (UID: \"a5b0695c-0239-4027-9d8d-e447e733a424\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv"
Mar 08 03:39:52.111515 master-0 kubenswrapper[33141]: I0308 03:39:52.111431 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5b0695c-0239-4027-9d8d-e447e733a424-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv\" (UID: \"a5b0695c-0239-4027-9d8d-e447e733a424\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv"
Mar 08 03:39:52.111759 master-0 kubenswrapper[33141]: I0308 03:39:52.111550 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5b0695c-0239-4027-9d8d-e447e733a424-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv\" (UID: \"a5b0695c-0239-4027-9d8d-e447e733a424\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv"
Mar 08 03:39:52.111759 master-0 kubenswrapper[33141]: I0308 03:39:52.111583 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txd85\" (UniqueName: \"kubernetes.io/projected/a5b0695c-0239-4027-9d8d-e447e733a424-kube-api-access-txd85\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv\" (UID: \"a5b0695c-0239-4027-9d8d-e447e733a424\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv"
Mar 08 03:39:52.112140 master-0 kubenswrapper[33141]: I0308 03:39:52.112084 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5b0695c-0239-4027-9d8d-e447e733a424-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv\" (UID: \"a5b0695c-0239-4027-9d8d-e447e733a424\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv"
Mar 08 03:39:52.113054 master-0 kubenswrapper[33141]: I0308 03:39:52.112508 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5b0695c-0239-4027-9d8d-e447e733a424-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv\" (UID: \"a5b0695c-0239-4027-9d8d-e447e733a424\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv"
Mar 08 03:39:52.133726 master-0 kubenswrapper[33141]: I0308 03:39:52.133639 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txd85\" (UniqueName: \"kubernetes.io/projected/a5b0695c-0239-4027-9d8d-e447e733a424-kube-api-access-txd85\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv\" (UID: \"a5b0695c-0239-4027-9d8d-e447e733a424\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv"
Mar 08 03:39:52.276284 master-0 kubenswrapper[33141]: I0308 03:39:52.276194 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv"
Mar 08 03:39:52.790174 master-0 kubenswrapper[33141]: W0308 03:39:52.790098 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5b0695c_0239_4027_9d8d_e447e733a424.slice/crio-d443bd0f64c3f587eb3b23dd873dd7e06fe7cb8784918416a41935a329c763bc WatchSource:0}: Error finding container d443bd0f64c3f587eb3b23dd873dd7e06fe7cb8784918416a41935a329c763bc: Status 404 returned error can't find the container with id d443bd0f64c3f587eb3b23dd873dd7e06fe7cb8784918416a41935a329c763bc
Mar 08 03:39:52.800662 master-0 kubenswrapper[33141]: I0308 03:39:52.799190 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv"]
Mar 08 03:39:52.815575 master-0 kubenswrapper[33141]: I0308 03:39:52.815510 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv" event={"ID":"a5b0695c-0239-4027-9d8d-e447e733a424","Type":"ContainerStarted","Data":"d443bd0f64c3f587eb3b23dd873dd7e06fe7cb8784918416a41935a329c763bc"}
Mar 08 03:39:53.827063 master-0 kubenswrapper[33141]: I0308 03:39:53.826958 33141 generic.go:334] "Generic (PLEG): container finished" podID="a5b0695c-0239-4027-9d8d-e447e733a424" containerID="fc6f53888ad2b2c3e1794cea1b4c24d90059cc9ad57e0cd07e6fa339d27d1d0f" exitCode=0
Mar 08 03:39:53.827063 master-0 kubenswrapper[33141]: I0308 03:39:53.827035 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv" event={"ID":"a5b0695c-0239-4027-9d8d-e447e733a424","Type":"ContainerDied","Data":"fc6f53888ad2b2c3e1794cea1b4c24d90059cc9ad57e0cd07e6fa339d27d1d0f"}
Mar 08 03:39:53.829528 master-0 kubenswrapper[33141]: I0308 03:39:53.829383 33141 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 08 03:39:55.842812 master-0 kubenswrapper[33141]: I0308 03:39:55.842676 33141 generic.go:334] "Generic (PLEG): container finished" podID="a5b0695c-0239-4027-9d8d-e447e733a424" containerID="03d11842abc835379df6b0210e47e52e1354f7d0d1632f64d23ae1ca022ffe38" exitCode=0
Mar 08 03:39:55.842812 master-0 kubenswrapper[33141]: I0308 03:39:55.842796 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv" event={"ID":"a5b0695c-0239-4027-9d8d-e447e733a424","Type":"ContainerDied","Data":"03d11842abc835379df6b0210e47e52e1354f7d0d1632f64d23ae1ca022ffe38"}
Mar 08 03:39:56.858121 master-0 kubenswrapper[33141]: I0308 03:39:56.858001 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv" event={"ID":"a5b0695c-0239-4027-9d8d-e447e733a424","Type":"ContainerDied","Data":"7711e873595f37115148d8c42f74002ab36a754b386903de9bd689335dd76575"}
Mar 08 03:39:56.858121 master-0 kubenswrapper[33141]: I0308 03:39:56.857897 33141 generic.go:334] "Generic (PLEG): container finished" podID="a5b0695c-0239-4027-9d8d-e447e733a424" containerID="7711e873595f37115148d8c42f74002ab36a754b386903de9bd689335dd76575" exitCode=0
Mar 08 03:39:58.196841 master-0 kubenswrapper[33141]: I0308 03:39:58.196796 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv"
Mar 08 03:39:58.322983 master-0 kubenswrapper[33141]: I0308 03:39:58.322884 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5b0695c-0239-4027-9d8d-e447e733a424-bundle\") pod \"a5b0695c-0239-4027-9d8d-e447e733a424\" (UID: \"a5b0695c-0239-4027-9d8d-e447e733a424\") "
Mar 08 03:39:58.323298 master-0 kubenswrapper[33141]: I0308 03:39:58.323276 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txd85\" (UniqueName: \"kubernetes.io/projected/a5b0695c-0239-4027-9d8d-e447e733a424-kube-api-access-txd85\") pod \"a5b0695c-0239-4027-9d8d-e447e733a424\" (UID: \"a5b0695c-0239-4027-9d8d-e447e733a424\") "
Mar 08 03:39:58.323635 master-0 kubenswrapper[33141]: I0308 03:39:58.323586 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5b0695c-0239-4027-9d8d-e447e733a424-util\") pod \"a5b0695c-0239-4027-9d8d-e447e733a424\" (UID: \"a5b0695c-0239-4027-9d8d-e447e733a424\") "
Mar 08 03:39:58.324190 master-0 kubenswrapper[33141]: I0308 03:39:58.323656 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5b0695c-0239-4027-9d8d-e447e733a424-bundle" (OuterVolumeSpecName: "bundle") pod "a5b0695c-0239-4027-9d8d-e447e733a424" (UID: "a5b0695c-0239-4027-9d8d-e447e733a424"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 03:39:58.324472 master-0 kubenswrapper[33141]: I0308 03:39:58.324449 33141 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a5b0695c-0239-4027-9d8d-e447e733a424-bundle\") on node \"master-0\" DevicePath \"\""
Mar 08 03:39:58.326896 master-0 kubenswrapper[33141]: I0308 03:39:58.326832 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5b0695c-0239-4027-9d8d-e447e733a424-kube-api-access-txd85" (OuterVolumeSpecName: "kube-api-access-txd85") pod "a5b0695c-0239-4027-9d8d-e447e733a424" (UID: "a5b0695c-0239-4027-9d8d-e447e733a424"). InnerVolumeSpecName "kube-api-access-txd85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:39:58.339452 master-0 kubenswrapper[33141]: I0308 03:39:58.339385 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5b0695c-0239-4027-9d8d-e447e733a424-util" (OuterVolumeSpecName: "util") pod "a5b0695c-0239-4027-9d8d-e447e733a424" (UID: "a5b0695c-0239-4027-9d8d-e447e733a424"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 03:39:58.426128 master-0 kubenswrapper[33141]: I0308 03:39:58.425971 33141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txd85\" (UniqueName: \"kubernetes.io/projected/a5b0695c-0239-4027-9d8d-e447e733a424-kube-api-access-txd85\") on node \"master-0\" DevicePath \"\""
Mar 08 03:39:58.426128 master-0 kubenswrapper[33141]: I0308 03:39:58.426014 33141 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a5b0695c-0239-4027-9d8d-e447e733a424-util\") on node \"master-0\" DevicePath \"\""
Mar 08 03:39:58.880643 master-0 kubenswrapper[33141]: I0308 03:39:58.880563 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv" event={"ID":"a5b0695c-0239-4027-9d8d-e447e733a424","Type":"ContainerDied","Data":"d443bd0f64c3f587eb3b23dd873dd7e06fe7cb8784918416a41935a329c763bc"}
Mar 08 03:39:58.880643 master-0 kubenswrapper[33141]: I0308 03:39:58.880639 33141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d443bd0f64c3f587eb3b23dd873dd7e06fe7cb8784918416a41935a329c763bc"
Mar 08 03:39:58.881127 master-0 kubenswrapper[33141]: I0308 03:39:58.880724 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g47xv"
Mar 08 03:40:04.566754 master-0 kubenswrapper[33141]: I0308 03:40:04.566689 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-bfb8dcf9c-rfcbz"]
Mar 08 03:40:04.567699 master-0 kubenswrapper[33141]: E0308 03:40:04.567314 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5b0695c-0239-4027-9d8d-e447e733a424" containerName="pull"
Mar 08 03:40:04.567699 master-0 kubenswrapper[33141]: I0308 03:40:04.567341 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5b0695c-0239-4027-9d8d-e447e733a424" containerName="pull"
Mar 08 03:40:04.567699 master-0 kubenswrapper[33141]: E0308 03:40:04.567379 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5b0695c-0239-4027-9d8d-e447e733a424" containerName="util"
Mar 08 03:40:04.567699 master-0 kubenswrapper[33141]: I0308 03:40:04.567390 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5b0695c-0239-4027-9d8d-e447e733a424" containerName="util"
Mar 08 03:40:04.567699 master-0 kubenswrapper[33141]: E0308 03:40:04.567435 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5b0695c-0239-4027-9d8d-e447e733a424" containerName="extract"
Mar 08 03:40:04.567699 master-0 kubenswrapper[33141]: I0308 03:40:04.567449 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5b0695c-0239-4027-9d8d-e447e733a424" containerName="extract"
Mar 08 03:40:04.568175 master-0 kubenswrapper[33141]: I0308 03:40:04.567705 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5b0695c-0239-4027-9d8d-e447e733a424" containerName="extract"
Mar 08 03:40:04.568584 master-0 kubenswrapper[33141]: I0308 03:40:04.568545 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-bfb8dcf9c-rfcbz"
Mar 08 03:40:04.572137 master-0 kubenswrapper[33141]: I0308 03:40:04.572062 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert"
Mar 08 03:40:04.572614 master-0 kubenswrapper[33141]: I0308 03:40:04.572591 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt"
Mar 08 03:40:04.572985 master-0 kubenswrapper[33141]: I0308 03:40:04.572945 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert"
Mar 08 03:40:04.578594 master-0 kubenswrapper[33141]: I0308 03:40:04.578555 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt"
Mar 08 03:40:04.579209 master-0 kubenswrapper[33141]: I0308 03:40:04.578978 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert"
Mar 08 03:40:04.595731 master-0 kubenswrapper[33141]: I0308 03:40:04.595660 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-bfb8dcf9c-rfcbz"]
Mar 08 03:40:04.726777 master-0 kubenswrapper[33141]: I0308 03:40:04.726702 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/45e80f38-1789-4edc-8090-6bd26e1441bd-webhook-cert\") pod \"lvms-operator-bfb8dcf9c-rfcbz\" (UID: \"45e80f38-1789-4edc-8090-6bd26e1441bd\") " pod="openshift-storage/lvms-operator-bfb8dcf9c-rfcbz"
Mar 08 03:40:04.727043 master-0 kubenswrapper[33141]: I0308 03:40:04.726991 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvp7m\" (UniqueName: \"kubernetes.io/projected/45e80f38-1789-4edc-8090-6bd26e1441bd-kube-api-access-lvp7m\") pod \"lvms-operator-bfb8dcf9c-rfcbz\" (UID: \"45e80f38-1789-4edc-8090-6bd26e1441bd\") " pod="openshift-storage/lvms-operator-bfb8dcf9c-rfcbz"
Mar 08 03:40:04.727174 master-0 kubenswrapper[33141]: I0308 03:40:04.727117 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/45e80f38-1789-4edc-8090-6bd26e1441bd-apiservice-cert\") pod \"lvms-operator-bfb8dcf9c-rfcbz\" (UID: \"45e80f38-1789-4edc-8090-6bd26e1441bd\") " pod="openshift-storage/lvms-operator-bfb8dcf9c-rfcbz"
Mar 08 03:40:04.727240 master-0 kubenswrapper[33141]: I0308 03:40:04.727194 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/45e80f38-1789-4edc-8090-6bd26e1441bd-socket-dir\") pod \"lvms-operator-bfb8dcf9c-rfcbz\" (UID: \"45e80f38-1789-4edc-8090-6bd26e1441bd\") " pod="openshift-storage/lvms-operator-bfb8dcf9c-rfcbz"
Mar 08 03:40:04.727287 master-0 kubenswrapper[33141]: I0308 03:40:04.727248 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/45e80f38-1789-4edc-8090-6bd26e1441bd-metrics-cert\") pod \"lvms-operator-bfb8dcf9c-rfcbz\" (UID: \"45e80f38-1789-4edc-8090-6bd26e1441bd\") " pod="openshift-storage/lvms-operator-bfb8dcf9c-rfcbz"
Mar 08 03:40:04.828498 master-0 kubenswrapper[33141]: I0308 03:40:04.828398 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/45e80f38-1789-4edc-8090-6bd26e1441bd-webhook-cert\") pod \"lvms-operator-bfb8dcf9c-rfcbz\" (UID: \"45e80f38-1789-4edc-8090-6bd26e1441bd\") " pod="openshift-storage/lvms-operator-bfb8dcf9c-rfcbz"
Mar 08 03:40:04.828746 master-0 kubenswrapper[33141]: I0308 03:40:04.828730 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvp7m\" (UniqueName: \"kubernetes.io/projected/45e80f38-1789-4edc-8090-6bd26e1441bd-kube-api-access-lvp7m\") pod \"lvms-operator-bfb8dcf9c-rfcbz\" (UID: \"45e80f38-1789-4edc-8090-6bd26e1441bd\") " pod="openshift-storage/lvms-operator-bfb8dcf9c-rfcbz"
Mar 08 03:40:04.828854 master-0 kubenswrapper[33141]: I0308 03:40:04.828841 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/45e80f38-1789-4edc-8090-6bd26e1441bd-apiservice-cert\") pod \"lvms-operator-bfb8dcf9c-rfcbz\" (UID: \"45e80f38-1789-4edc-8090-6bd26e1441bd\") " pod="openshift-storage/lvms-operator-bfb8dcf9c-rfcbz"
Mar 08 03:40:04.828966 master-0 kubenswrapper[33141]: I0308 03:40:04.828951 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/45e80f38-1789-4edc-8090-6bd26e1441bd-socket-dir\") pod \"lvms-operator-bfb8dcf9c-rfcbz\" (UID: \"45e80f38-1789-4edc-8090-6bd26e1441bd\") " pod="openshift-storage/lvms-operator-bfb8dcf9c-rfcbz"
Mar 08 03:40:04.829065 master-0 kubenswrapper[33141]: I0308 03:40:04.829050 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/45e80f38-1789-4edc-8090-6bd26e1441bd-metrics-cert\") pod \"lvms-operator-bfb8dcf9c-rfcbz\" (UID: \"45e80f38-1789-4edc-8090-6bd26e1441bd\") " pod="openshift-storage/lvms-operator-bfb8dcf9c-rfcbz"
Mar 08 03:40:04.829614 master-0 kubenswrapper[33141]: I0308 03:40:04.829406 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/45e80f38-1789-4edc-8090-6bd26e1441bd-socket-dir\") pod \"lvms-operator-bfb8dcf9c-rfcbz\" (UID: \"45e80f38-1789-4edc-8090-6bd26e1441bd\") " pod="openshift-storage/lvms-operator-bfb8dcf9c-rfcbz"
Mar 08 03:40:04.833301 master-0 kubenswrapper[33141]: I0308 03:40:04.833255 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"metrics-cert\" (UniqueName: \"kubernetes.io/secret/45e80f38-1789-4edc-8090-6bd26e1441bd-metrics-cert\") pod \"lvms-operator-bfb8dcf9c-rfcbz\" (UID: \"45e80f38-1789-4edc-8090-6bd26e1441bd\") " pod="openshift-storage/lvms-operator-bfb8dcf9c-rfcbz" Mar 08 03:40:04.833392 master-0 kubenswrapper[33141]: I0308 03:40:04.833294 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/45e80f38-1789-4edc-8090-6bd26e1441bd-apiservice-cert\") pod \"lvms-operator-bfb8dcf9c-rfcbz\" (UID: \"45e80f38-1789-4edc-8090-6bd26e1441bd\") " pod="openshift-storage/lvms-operator-bfb8dcf9c-rfcbz" Mar 08 03:40:04.836921 master-0 kubenswrapper[33141]: I0308 03:40:04.834317 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/45e80f38-1789-4edc-8090-6bd26e1441bd-webhook-cert\") pod \"lvms-operator-bfb8dcf9c-rfcbz\" (UID: \"45e80f38-1789-4edc-8090-6bd26e1441bd\") " pod="openshift-storage/lvms-operator-bfb8dcf9c-rfcbz" Mar 08 03:40:04.847764 master-0 kubenswrapper[33141]: I0308 03:40:04.847678 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvp7m\" (UniqueName: \"kubernetes.io/projected/45e80f38-1789-4edc-8090-6bd26e1441bd-kube-api-access-lvp7m\") pod \"lvms-operator-bfb8dcf9c-rfcbz\" (UID: \"45e80f38-1789-4edc-8090-6bd26e1441bd\") " pod="openshift-storage/lvms-operator-bfb8dcf9c-rfcbz" Mar 08 03:40:04.887350 master-0 kubenswrapper[33141]: I0308 03:40:04.887264 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/lvms-operator-bfb8dcf9c-rfcbz" Mar 08 03:40:05.354595 master-0 kubenswrapper[33141]: I0308 03:40:05.354537 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-bfb8dcf9c-rfcbz"] Mar 08 03:40:05.357275 master-0 kubenswrapper[33141]: W0308 03:40:05.357232 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45e80f38_1789_4edc_8090_6bd26e1441bd.slice/crio-a8ebd89b6787f2504de852f231db3b13c64d5b9cda1b52e3c55fe6ca185d72b7 WatchSource:0}: Error finding container a8ebd89b6787f2504de852f231db3b13c64d5b9cda1b52e3c55fe6ca185d72b7: Status 404 returned error can't find the container with id a8ebd89b6787f2504de852f231db3b13c64d5b9cda1b52e3c55fe6ca185d72b7 Mar 08 03:40:05.973887 master-0 kubenswrapper[33141]: I0308 03:40:05.973774 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-bfb8dcf9c-rfcbz" event={"ID":"45e80f38-1789-4edc-8090-6bd26e1441bd","Type":"ContainerStarted","Data":"a8ebd89b6787f2504de852f231db3b13c64d5b9cda1b52e3c55fe6ca185d72b7"} Mar 08 03:40:11.015242 master-0 kubenswrapper[33141]: I0308 03:40:11.015170 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-bfb8dcf9c-rfcbz" event={"ID":"45e80f38-1789-4edc-8090-6bd26e1441bd","Type":"ContainerStarted","Data":"39c8c4c9d8d9bb4e0864e3513b20463495ff2d4b1b2951b0f13d85d4651d5765"} Mar 08 03:40:11.015780 master-0 kubenswrapper[33141]: I0308 03:40:11.015612 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-bfb8dcf9c-rfcbz" Mar 08 03:40:11.021972 master-0 kubenswrapper[33141]: I0308 03:40:11.021896 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-bfb8dcf9c-rfcbz" Mar 08 03:40:11.055993 master-0 kubenswrapper[33141]: I0308 03:40:11.055766 33141 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-bfb8dcf9c-rfcbz" podStartSLOduration=2.344644819 podStartE2EDuration="7.055736813s" podCreationTimestamp="2026-03-08 03:40:04 +0000 UTC" firstStartedPulling="2026-03-08 03:40:05.360371131 +0000 UTC m=+519.230264344" lastFinishedPulling="2026-03-08 03:40:10.071463145 +0000 UTC m=+523.941356338" observedRunningTime="2026-03-08 03:40:11.045485626 +0000 UTC m=+524.915378849" watchObservedRunningTime="2026-03-08 03:40:11.055736813 +0000 UTC m=+524.925630016" Mar 08 03:40:14.887670 master-0 kubenswrapper[33141]: I0308 03:40:14.886446 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb"] Mar 08 03:40:14.889619 master-0 kubenswrapper[33141]: I0308 03:40:14.889565 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb" Mar 08 03:40:14.896448 master-0 kubenswrapper[33141]: I0308 03:40:14.896381 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-gxv5n" Mar 08 03:40:14.908031 master-0 kubenswrapper[33141]: I0308 03:40:14.907975 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb"] Mar 08 03:40:15.002491 master-0 kubenswrapper[33141]: I0308 03:40:15.002190 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4575461d-87d0-48f4-a495-17b6da70b2b8-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb\" (UID: \"4575461d-87d0-48f4-a495-17b6da70b2b8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb" Mar 08 03:40:15.003105 master-0 kubenswrapper[33141]: I0308 
03:40:15.003076 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4575461d-87d0-48f4-a495-17b6da70b2b8-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb\" (UID: \"4575461d-87d0-48f4-a495-17b6da70b2b8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb" Mar 08 03:40:15.003309 master-0 kubenswrapper[33141]: I0308 03:40:15.003286 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fgh7\" (UniqueName: \"kubernetes.io/projected/4575461d-87d0-48f4-a495-17b6da70b2b8-kube-api-access-2fgh7\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb\" (UID: \"4575461d-87d0-48f4-a495-17b6da70b2b8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb" Mar 08 03:40:15.104881 master-0 kubenswrapper[33141]: I0308 03:40:15.104801 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4575461d-87d0-48f4-a495-17b6da70b2b8-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb\" (UID: \"4575461d-87d0-48f4-a495-17b6da70b2b8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb" Mar 08 03:40:15.104881 master-0 kubenswrapper[33141]: I0308 03:40:15.104939 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4575461d-87d0-48f4-a495-17b6da70b2b8-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb\" (UID: \"4575461d-87d0-48f4-a495-17b6da70b2b8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb" Mar 08 03:40:15.105509 master-0 kubenswrapper[33141]: I0308 03:40:15.105110 33141 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2fgh7\" (UniqueName: \"kubernetes.io/projected/4575461d-87d0-48f4-a495-17b6da70b2b8-kube-api-access-2fgh7\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb\" (UID: \"4575461d-87d0-48f4-a495-17b6da70b2b8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb" Mar 08 03:40:15.105509 master-0 kubenswrapper[33141]: I0308 03:40:15.105308 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4575461d-87d0-48f4-a495-17b6da70b2b8-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb\" (UID: \"4575461d-87d0-48f4-a495-17b6da70b2b8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb" Mar 08 03:40:15.105969 master-0 kubenswrapper[33141]: I0308 03:40:15.105821 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4575461d-87d0-48f4-a495-17b6da70b2b8-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb\" (UID: \"4575461d-87d0-48f4-a495-17b6da70b2b8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb" Mar 08 03:40:15.134700 master-0 kubenswrapper[33141]: I0308 03:40:15.134614 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fgh7\" (UniqueName: \"kubernetes.io/projected/4575461d-87d0-48f4-a495-17b6da70b2b8-kube-api-access-2fgh7\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb\" (UID: \"4575461d-87d0-48f4-a495-17b6da70b2b8\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb" Mar 08 03:40:15.221300 master-0 kubenswrapper[33141]: I0308 03:40:15.221101 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb" Mar 08 03:40:15.690259 master-0 kubenswrapper[33141]: I0308 03:40:15.690207 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb"] Mar 08 03:40:15.693620 master-0 kubenswrapper[33141]: W0308 03:40:15.693585 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4575461d_87d0_48f4_a495_17b6da70b2b8.slice/crio-6cfcba44fbbdb96ea181d45751bb9256498a4e85cdb7b08e664931dda2cad4bc WatchSource:0}: Error finding container 6cfcba44fbbdb96ea181d45751bb9256498a4e85cdb7b08e664931dda2cad4bc: Status 404 returned error can't find the container with id 6cfcba44fbbdb96ea181d45751bb9256498a4e85cdb7b08e664931dda2cad4bc Mar 08 03:40:16.058668 master-0 kubenswrapper[33141]: I0308 03:40:16.058580 33141 generic.go:334] "Generic (PLEG): container finished" podID="4575461d-87d0-48f4-a495-17b6da70b2b8" containerID="3c20f670f89747fcf751c61497693bfa1b19f6a2655bb679ec70c9fe46851e2c" exitCode=0 Mar 08 03:40:16.059248 master-0 kubenswrapper[33141]: I0308 03:40:16.058640 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb" event={"ID":"4575461d-87d0-48f4-a495-17b6da70b2b8","Type":"ContainerDied","Data":"3c20f670f89747fcf751c61497693bfa1b19f6a2655bb679ec70c9fe46851e2c"} Mar 08 03:40:16.059248 master-0 kubenswrapper[33141]: I0308 03:40:16.058719 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb" event={"ID":"4575461d-87d0-48f4-a495-17b6da70b2b8","Type":"ContainerStarted","Data":"6cfcba44fbbdb96ea181d45751bb9256498a4e85cdb7b08e664931dda2cad4bc"} Mar 08 03:40:17.287067 master-0 kubenswrapper[33141]: I0308 03:40:17.286974 33141 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz"] Mar 08 03:40:17.288970 master-0 kubenswrapper[33141]: I0308 03:40:17.288927 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz" Mar 08 03:40:17.302564 master-0 kubenswrapper[33141]: I0308 03:40:17.302486 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz"] Mar 08 03:40:17.440755 master-0 kubenswrapper[33141]: I0308 03:40:17.440665 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cc3ebc9e-43a7-4c3f-a27d-77d317c8102f-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz\" (UID: \"cc3ebc9e-43a7-4c3f-a27d-77d317c8102f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz" Mar 08 03:40:17.441030 master-0 kubenswrapper[33141]: I0308 03:40:17.440859 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cc3ebc9e-43a7-4c3f-a27d-77d317c8102f-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz\" (UID: \"cc3ebc9e-43a7-4c3f-a27d-77d317c8102f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz" Mar 08 03:40:17.441200 master-0 kubenswrapper[33141]: I0308 03:40:17.441149 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7gvp\" (UniqueName: \"kubernetes.io/projected/cc3ebc9e-43a7-4c3f-a27d-77d317c8102f-kube-api-access-l7gvp\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz\" (UID: 
\"cc3ebc9e-43a7-4c3f-a27d-77d317c8102f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz" Mar 08 03:40:17.543728 master-0 kubenswrapper[33141]: I0308 03:40:17.543533 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cc3ebc9e-43a7-4c3f-a27d-77d317c8102f-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz\" (UID: \"cc3ebc9e-43a7-4c3f-a27d-77d317c8102f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz" Mar 08 03:40:17.543728 master-0 kubenswrapper[33141]: I0308 03:40:17.543645 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cc3ebc9e-43a7-4c3f-a27d-77d317c8102f-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz\" (UID: \"cc3ebc9e-43a7-4c3f-a27d-77d317c8102f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz" Mar 08 03:40:17.543728 master-0 kubenswrapper[33141]: I0308 03:40:17.543713 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7gvp\" (UniqueName: \"kubernetes.io/projected/cc3ebc9e-43a7-4c3f-a27d-77d317c8102f-kube-api-access-l7gvp\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz\" (UID: \"cc3ebc9e-43a7-4c3f-a27d-77d317c8102f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz" Mar 08 03:40:17.544331 master-0 kubenswrapper[33141]: I0308 03:40:17.544247 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cc3ebc9e-43a7-4c3f-a27d-77d317c8102f-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz\" (UID: \"cc3ebc9e-43a7-4c3f-a27d-77d317c8102f\") " 
pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz" Mar 08 03:40:17.544414 master-0 kubenswrapper[33141]: I0308 03:40:17.544316 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cc3ebc9e-43a7-4c3f-a27d-77d317c8102f-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz\" (UID: \"cc3ebc9e-43a7-4c3f-a27d-77d317c8102f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz" Mar 08 03:40:17.564611 master-0 kubenswrapper[33141]: I0308 03:40:17.564233 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7gvp\" (UniqueName: \"kubernetes.io/projected/cc3ebc9e-43a7-4c3f-a27d-77d317c8102f-kube-api-access-l7gvp\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz\" (UID: \"cc3ebc9e-43a7-4c3f-a27d-77d317c8102f\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz" Mar 08 03:40:17.618680 master-0 kubenswrapper[33141]: I0308 03:40:17.618590 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz" Mar 08 03:40:18.103556 master-0 kubenswrapper[33141]: I0308 03:40:18.103491 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs"] Mar 08 03:40:18.108107 master-0 kubenswrapper[33141]: I0308 03:40:18.107990 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs" Mar 08 03:40:18.113805 master-0 kubenswrapper[33141]: I0308 03:40:18.113764 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz"] Mar 08 03:40:18.121438 master-0 kubenswrapper[33141]: I0308 03:40:18.119317 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs"] Mar 08 03:40:18.141957 master-0 kubenswrapper[33141]: W0308 03:40:18.140765 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc3ebc9e_43a7_4c3f_a27d_77d317c8102f.slice/crio-5c1e449d1c2f088dec608fbba10c577bac97e3b15f42d95ba7c853e67575f1fd WatchSource:0}: Error finding container 5c1e449d1c2f088dec608fbba10c577bac97e3b15f42d95ba7c853e67575f1fd: Status 404 returned error can't find the container with id 5c1e449d1c2f088dec608fbba10c577bac97e3b15f42d95ba7c853e67575f1fd Mar 08 03:40:18.255512 master-0 kubenswrapper[33141]: I0308 03:40:18.255443 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5e80de77-15f9-4287-a536-3437324e5ac9-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs\" (UID: \"5e80de77-15f9-4287-a536-3437324e5ac9\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs" Mar 08 03:40:18.255512 master-0 kubenswrapper[33141]: I0308 03:40:18.255512 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5e80de77-15f9-4287-a536-3437324e5ac9-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs\" (UID: \"5e80de77-15f9-4287-a536-3437324e5ac9\") " 
pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs" Mar 08 03:40:18.255766 master-0 kubenswrapper[33141]: I0308 03:40:18.255568 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6psg\" (UniqueName: \"kubernetes.io/projected/5e80de77-15f9-4287-a536-3437324e5ac9-kube-api-access-j6psg\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs\" (UID: \"5e80de77-15f9-4287-a536-3437324e5ac9\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs" Mar 08 03:40:18.357242 master-0 kubenswrapper[33141]: I0308 03:40:18.356733 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5e80de77-15f9-4287-a536-3437324e5ac9-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs\" (UID: \"5e80de77-15f9-4287-a536-3437324e5ac9\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs" Mar 08 03:40:18.357242 master-0 kubenswrapper[33141]: I0308 03:40:18.356807 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5e80de77-15f9-4287-a536-3437324e5ac9-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs\" (UID: \"5e80de77-15f9-4287-a536-3437324e5ac9\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs" Mar 08 03:40:18.357242 master-0 kubenswrapper[33141]: I0308 03:40:18.356864 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6psg\" (UniqueName: \"kubernetes.io/projected/5e80de77-15f9-4287-a536-3437324e5ac9-kube-api-access-j6psg\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs\" (UID: \"5e80de77-15f9-4287-a536-3437324e5ac9\") " 
pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs" Mar 08 03:40:18.357919 master-0 kubenswrapper[33141]: I0308 03:40:18.357408 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5e80de77-15f9-4287-a536-3437324e5ac9-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs\" (UID: \"5e80de77-15f9-4287-a536-3437324e5ac9\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs" Mar 08 03:40:18.357919 master-0 kubenswrapper[33141]: I0308 03:40:18.357664 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5e80de77-15f9-4287-a536-3437324e5ac9-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs\" (UID: \"5e80de77-15f9-4287-a536-3437324e5ac9\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs" Mar 08 03:40:18.378947 master-0 kubenswrapper[33141]: I0308 03:40:18.378875 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6psg\" (UniqueName: \"kubernetes.io/projected/5e80de77-15f9-4287-a536-3437324e5ac9-kube-api-access-j6psg\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs\" (UID: \"5e80de77-15f9-4287-a536-3437324e5ac9\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs" Mar 08 03:40:18.443252 master-0 kubenswrapper[33141]: I0308 03:40:18.443113 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs" Mar 08 03:40:19.091657 master-0 kubenswrapper[33141]: I0308 03:40:19.091583 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs"] Mar 08 03:40:19.094642 master-0 kubenswrapper[33141]: I0308 03:40:19.094553 33141 generic.go:334] "Generic (PLEG): container finished" podID="cc3ebc9e-43a7-4c3f-a27d-77d317c8102f" containerID="084e7ddd0d8fe5866c65ea8c2d397a4d5073c3dd2320210d1357aa499520f00f" exitCode=0 Mar 08 03:40:19.094642 master-0 kubenswrapper[33141]: I0308 03:40:19.094592 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz" event={"ID":"cc3ebc9e-43a7-4c3f-a27d-77d317c8102f","Type":"ContainerDied","Data":"084e7ddd0d8fe5866c65ea8c2d397a4d5073c3dd2320210d1357aa499520f00f"} Mar 08 03:40:19.094642 master-0 kubenswrapper[33141]: I0308 03:40:19.094620 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz" event={"ID":"cc3ebc9e-43a7-4c3f-a27d-77d317c8102f","Type":"ContainerStarted","Data":"5c1e449d1c2f088dec608fbba10c577bac97e3b15f42d95ba7c853e67575f1fd"} Mar 08 03:40:19.785694 master-0 kubenswrapper[33141]: W0308 03:40:19.785605 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e80de77_15f9_4287_a536_3437324e5ac9.slice/crio-15a8077323bb379d8a3c68e8ec96982ae835d4697c0aa59704889277a94b6592 WatchSource:0}: Error finding container 15a8077323bb379d8a3c68e8ec96982ae835d4697c0aa59704889277a94b6592: Status 404 returned error can't find the container with id 15a8077323bb379d8a3c68e8ec96982ae835d4697c0aa59704889277a94b6592 Mar 08 03:40:20.104038 master-0 kubenswrapper[33141]: I0308 03:40:20.103979 33141 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb" event={"ID":"4575461d-87d0-48f4-a495-17b6da70b2b8","Type":"ContainerStarted","Data":"bec18237322b7919f007416b3bea4686008a2fa82777a51c0516e7c6e7bdb20f"} Mar 08 03:40:20.105603 master-0 kubenswrapper[33141]: I0308 03:40:20.105560 33141 generic.go:334] "Generic (PLEG): container finished" podID="5e80de77-15f9-4287-a536-3437324e5ac9" containerID="857e12a684f4dcd75196edb47954c0d027a4e42325a4de90bd7cbb50ce64910d" exitCode=0 Mar 08 03:40:20.105603 master-0 kubenswrapper[33141]: I0308 03:40:20.105600 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs" event={"ID":"5e80de77-15f9-4287-a536-3437324e5ac9","Type":"ContainerDied","Data":"857e12a684f4dcd75196edb47954c0d027a4e42325a4de90bd7cbb50ce64910d"} Mar 08 03:40:20.105603 master-0 kubenswrapper[33141]: I0308 03:40:20.105625 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs" event={"ID":"5e80de77-15f9-4287-a536-3437324e5ac9","Type":"ContainerStarted","Data":"15a8077323bb379d8a3c68e8ec96982ae835d4697c0aa59704889277a94b6592"} Mar 08 03:40:21.118998 master-0 kubenswrapper[33141]: I0308 03:40:21.118873 33141 generic.go:334] "Generic (PLEG): container finished" podID="cc3ebc9e-43a7-4c3f-a27d-77d317c8102f" containerID="b0b03c81bf10de660f796505694961ddd2153cf0730aa6da59d8262b4f714c29" exitCode=0 Mar 08 03:40:21.118998 master-0 kubenswrapper[33141]: I0308 03:40:21.118972 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz" event={"ID":"cc3ebc9e-43a7-4c3f-a27d-77d317c8102f","Type":"ContainerDied","Data":"b0b03c81bf10de660f796505694961ddd2153cf0730aa6da59d8262b4f714c29"} Mar 08 03:40:21.124765 master-0 
kubenswrapper[33141]: I0308 03:40:21.124682 33141 generic.go:334] "Generic (PLEG): container finished" podID="4575461d-87d0-48f4-a495-17b6da70b2b8" containerID="bec18237322b7919f007416b3bea4686008a2fa82777a51c0516e7c6e7bdb20f" exitCode=0
Mar 08 03:40:21.124765 master-0 kubenswrapper[33141]: I0308 03:40:21.124746 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb" event={"ID":"4575461d-87d0-48f4-a495-17b6da70b2b8","Type":"ContainerDied","Data":"bec18237322b7919f007416b3bea4686008a2fa82777a51c0516e7c6e7bdb20f"}
Mar 08 03:40:22.137633 master-0 kubenswrapper[33141]: I0308 03:40:22.137565 33141 generic.go:334] "Generic (PLEG): container finished" podID="cc3ebc9e-43a7-4c3f-a27d-77d317c8102f" containerID="b82bc2cbe1adc80b7558ec86f821bc2b2d01087d86097aa224dcabbc29a9e54f" exitCode=0
Mar 08 03:40:22.138615 master-0 kubenswrapper[33141]: I0308 03:40:22.137810 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz" event={"ID":"cc3ebc9e-43a7-4c3f-a27d-77d317c8102f","Type":"ContainerDied","Data":"b82bc2cbe1adc80b7558ec86f821bc2b2d01087d86097aa224dcabbc29a9e54f"}
Mar 08 03:40:22.145541 master-0 kubenswrapper[33141]: I0308 03:40:22.145466 33141 generic.go:334] "Generic (PLEG): container finished" podID="4575461d-87d0-48f4-a495-17b6da70b2b8" containerID="ed1fcd0be37cc5b20f4031b2e851236b5e1ab5f989d7089860318ee08d2e0d4d" exitCode=0
Mar 08 03:40:22.145679 master-0 kubenswrapper[33141]: I0308 03:40:22.145582 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb" event={"ID":"4575461d-87d0-48f4-a495-17b6da70b2b8","Type":"ContainerDied","Data":"ed1fcd0be37cc5b20f4031b2e851236b5e1ab5f989d7089860318ee08d2e0d4d"}
Mar 08 03:40:22.149481 master-0 kubenswrapper[33141]: I0308 03:40:22.149378 33141 generic.go:334] "Generic (PLEG): container finished" podID="5e80de77-15f9-4287-a536-3437324e5ac9" containerID="e2c0a7be74e67a6f5ef84a8126a64b43dad64e878d94ad61549704237190e22c" exitCode=0
Mar 08 03:40:22.149617 master-0 kubenswrapper[33141]: I0308 03:40:22.149474 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs" event={"ID":"5e80de77-15f9-4287-a536-3437324e5ac9","Type":"ContainerDied","Data":"e2c0a7be74e67a6f5ef84a8126a64b43dad64e878d94ad61549704237190e22c"}
Mar 08 03:40:23.101504 master-0 kubenswrapper[33141]: I0308 03:40:23.101430 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76"]
Mar 08 03:40:23.104094 master-0 kubenswrapper[33141]: I0308 03:40:23.104033 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76"
Mar 08 03:40:23.124199 master-0 kubenswrapper[33141]: I0308 03:40:23.124155 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76"]
Mar 08 03:40:23.179930 master-0 kubenswrapper[33141]: I0308 03:40:23.178851 33141 generic.go:334] "Generic (PLEG): container finished" podID="5e80de77-15f9-4287-a536-3437324e5ac9" containerID="2f1d5aaafd2fd1a4afd49147ed42e4ed050db28743209f8533de9be85aee40aa" exitCode=0
Mar 08 03:40:23.179930 master-0 kubenswrapper[33141]: I0308 03:40:23.179504 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs" event={"ID":"5e80de77-15f9-4287-a536-3437324e5ac9","Type":"ContainerDied","Data":"2f1d5aaafd2fd1a4afd49147ed42e4ed050db28743209f8533de9be85aee40aa"}
Mar 08 03:40:23.244925 master-0 kubenswrapper[33141]: I0308 03:40:23.243529 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e197bb4-4815-4c6d-93cd-cfe2a28160ef-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76\" (UID: \"4e197bb4-4815-4c6d-93cd-cfe2a28160ef\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76"
Mar 08 03:40:23.244925 master-0 kubenswrapper[33141]: I0308 03:40:23.243613 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e197bb4-4815-4c6d-93cd-cfe2a28160ef-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76\" (UID: \"4e197bb4-4815-4c6d-93cd-cfe2a28160ef\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76"
Mar 08 03:40:23.244925 master-0 kubenswrapper[33141]: I0308 03:40:23.243639 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvzlb\" (UniqueName: \"kubernetes.io/projected/4e197bb4-4815-4c6d-93cd-cfe2a28160ef-kube-api-access-rvzlb\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76\" (UID: \"4e197bb4-4815-4c6d-93cd-cfe2a28160ef\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76"
Mar 08 03:40:23.345359 master-0 kubenswrapper[33141]: I0308 03:40:23.345300 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e197bb4-4815-4c6d-93cd-cfe2a28160ef-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76\" (UID: \"4e197bb4-4815-4c6d-93cd-cfe2a28160ef\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76"
Mar 08 03:40:23.345627 master-0 kubenswrapper[33141]: I0308 03:40:23.345384 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e197bb4-4815-4c6d-93cd-cfe2a28160ef-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76\" (UID: \"4e197bb4-4815-4c6d-93cd-cfe2a28160ef\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76"
Mar 08 03:40:23.345627 master-0 kubenswrapper[33141]: I0308 03:40:23.345416 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvzlb\" (UniqueName: \"kubernetes.io/projected/4e197bb4-4815-4c6d-93cd-cfe2a28160ef-kube-api-access-rvzlb\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76\" (UID: \"4e197bb4-4815-4c6d-93cd-cfe2a28160ef\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76"
Mar 08 03:40:23.346506 master-0 kubenswrapper[33141]: I0308 03:40:23.346482 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e197bb4-4815-4c6d-93cd-cfe2a28160ef-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76\" (UID: \"4e197bb4-4815-4c6d-93cd-cfe2a28160ef\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76"
Mar 08 03:40:23.346735 master-0 kubenswrapper[33141]: I0308 03:40:23.346706 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e197bb4-4815-4c6d-93cd-cfe2a28160ef-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76\" (UID: \"4e197bb4-4815-4c6d-93cd-cfe2a28160ef\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76"
Mar 08 03:40:23.372263 master-0 kubenswrapper[33141]: I0308 03:40:23.372088 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvzlb\" (UniqueName: \"kubernetes.io/projected/4e197bb4-4815-4c6d-93cd-cfe2a28160ef-kube-api-access-rvzlb\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76\" (UID: \"4e197bb4-4815-4c6d-93cd-cfe2a28160ef\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76"
Mar 08 03:40:23.425986 master-0 kubenswrapper[33141]: I0308 03:40:23.425923 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76"
Mar 08 03:40:23.472234 master-0 kubenswrapper[33141]: I0308 03:40:23.469672 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz"
Mar 08 03:40:23.659191 master-0 kubenswrapper[33141]: I0308 03:40:23.659078 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7gvp\" (UniqueName: \"kubernetes.io/projected/cc3ebc9e-43a7-4c3f-a27d-77d317c8102f-kube-api-access-l7gvp\") pod \"cc3ebc9e-43a7-4c3f-a27d-77d317c8102f\" (UID: \"cc3ebc9e-43a7-4c3f-a27d-77d317c8102f\") "
Mar 08 03:40:23.659191 master-0 kubenswrapper[33141]: I0308 03:40:23.659141 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cc3ebc9e-43a7-4c3f-a27d-77d317c8102f-util\") pod \"cc3ebc9e-43a7-4c3f-a27d-77d317c8102f\" (UID: \"cc3ebc9e-43a7-4c3f-a27d-77d317c8102f\") "
Mar 08 03:40:23.659634 master-0 kubenswrapper[33141]: I0308 03:40:23.659261 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cc3ebc9e-43a7-4c3f-a27d-77d317c8102f-bundle\") pod \"cc3ebc9e-43a7-4c3f-a27d-77d317c8102f\" (UID: \"cc3ebc9e-43a7-4c3f-a27d-77d317c8102f\") "
Mar 08 03:40:23.660758 master-0 kubenswrapper[33141]: I0308 03:40:23.660716 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc3ebc9e-43a7-4c3f-a27d-77d317c8102f-bundle" (OuterVolumeSpecName: "bundle") pod "cc3ebc9e-43a7-4c3f-a27d-77d317c8102f" (UID: "cc3ebc9e-43a7-4c3f-a27d-77d317c8102f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 03:40:23.672254 master-0 kubenswrapper[33141]: I0308 03:40:23.672191 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc3ebc9e-43a7-4c3f-a27d-77d317c8102f-kube-api-access-l7gvp" (OuterVolumeSpecName: "kube-api-access-l7gvp") pod "cc3ebc9e-43a7-4c3f-a27d-77d317c8102f" (UID: "cc3ebc9e-43a7-4c3f-a27d-77d317c8102f"). InnerVolumeSpecName "kube-api-access-l7gvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:40:23.673820 master-0 kubenswrapper[33141]: I0308 03:40:23.673750 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc3ebc9e-43a7-4c3f-a27d-77d317c8102f-util" (OuterVolumeSpecName: "util") pod "cc3ebc9e-43a7-4c3f-a27d-77d317c8102f" (UID: "cc3ebc9e-43a7-4c3f-a27d-77d317c8102f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 03:40:23.714441 master-0 kubenswrapper[33141]: I0308 03:40:23.714403 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb"
Mar 08 03:40:23.762315 master-0 kubenswrapper[33141]: I0308 03:40:23.762251 33141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7gvp\" (UniqueName: \"kubernetes.io/projected/cc3ebc9e-43a7-4c3f-a27d-77d317c8102f-kube-api-access-l7gvp\") on node \"master-0\" DevicePath \"\""
Mar 08 03:40:23.762315 master-0 kubenswrapper[33141]: I0308 03:40:23.762315 33141 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cc3ebc9e-43a7-4c3f-a27d-77d317c8102f-util\") on node \"master-0\" DevicePath \"\""
Mar 08 03:40:23.762567 master-0 kubenswrapper[33141]: I0308 03:40:23.762339 33141 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cc3ebc9e-43a7-4c3f-a27d-77d317c8102f-bundle\") on node \"master-0\" DevicePath \"\""
Mar 08 03:40:23.863375 master-0 kubenswrapper[33141]: I0308 03:40:23.863288 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fgh7\" (UniqueName: \"kubernetes.io/projected/4575461d-87d0-48f4-a495-17b6da70b2b8-kube-api-access-2fgh7\") pod \"4575461d-87d0-48f4-a495-17b6da70b2b8\" (UID: \"4575461d-87d0-48f4-a495-17b6da70b2b8\") "
Mar 08 03:40:23.863601 master-0 kubenswrapper[33141]: I0308 03:40:23.863512 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4575461d-87d0-48f4-a495-17b6da70b2b8-util\") pod \"4575461d-87d0-48f4-a495-17b6da70b2b8\" (UID: \"4575461d-87d0-48f4-a495-17b6da70b2b8\") "
Mar 08 03:40:23.863676 master-0 kubenswrapper[33141]: I0308 03:40:23.863653 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4575461d-87d0-48f4-a495-17b6da70b2b8-bundle\") pod \"4575461d-87d0-48f4-a495-17b6da70b2b8\" (UID: \"4575461d-87d0-48f4-a495-17b6da70b2b8\") "
Mar 08 03:40:23.864815 master-0 kubenswrapper[33141]: I0308 03:40:23.864763 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4575461d-87d0-48f4-a495-17b6da70b2b8-bundle" (OuterVolumeSpecName: "bundle") pod "4575461d-87d0-48f4-a495-17b6da70b2b8" (UID: "4575461d-87d0-48f4-a495-17b6da70b2b8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 03:40:23.868693 master-0 kubenswrapper[33141]: I0308 03:40:23.868653 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4575461d-87d0-48f4-a495-17b6da70b2b8-kube-api-access-2fgh7" (OuterVolumeSpecName: "kube-api-access-2fgh7") pod "4575461d-87d0-48f4-a495-17b6da70b2b8" (UID: "4575461d-87d0-48f4-a495-17b6da70b2b8"). InnerVolumeSpecName "kube-api-access-2fgh7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:40:23.875942 master-0 kubenswrapper[33141]: I0308 03:40:23.875840 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4575461d-87d0-48f4-a495-17b6da70b2b8-util" (OuterVolumeSpecName: "util") pod "4575461d-87d0-48f4-a495-17b6da70b2b8" (UID: "4575461d-87d0-48f4-a495-17b6da70b2b8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 03:40:23.928683 master-0 kubenswrapper[33141]: I0308 03:40:23.928602 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76"]
Mar 08 03:40:23.940135 master-0 kubenswrapper[33141]: W0308 03:40:23.939780 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e197bb4_4815_4c6d_93cd_cfe2a28160ef.slice/crio-1e513b7efea2d260ab457716d24aff908234df02a86782163f2978186518eb0c WatchSource:0}: Error finding container 1e513b7efea2d260ab457716d24aff908234df02a86782163f2978186518eb0c: Status 404 returned error can't find the container with id 1e513b7efea2d260ab457716d24aff908234df02a86782163f2978186518eb0c
Mar 08 03:40:23.965524 master-0 kubenswrapper[33141]: I0308 03:40:23.965447 33141 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4575461d-87d0-48f4-a495-17b6da70b2b8-bundle\") on node \"master-0\" DevicePath \"\""
Mar 08 03:40:23.965524 master-0 kubenswrapper[33141]: I0308 03:40:23.965494 33141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fgh7\" (UniqueName: \"kubernetes.io/projected/4575461d-87d0-48f4-a495-17b6da70b2b8-kube-api-access-2fgh7\") on node \"master-0\" DevicePath \"\""
Mar 08 03:40:23.965524 master-0 kubenswrapper[33141]: I0308 03:40:23.965507 33141 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4575461d-87d0-48f4-a495-17b6da70b2b8-util\") on node \"master-0\" DevicePath \"\""
Mar 08 03:40:24.192283 master-0 kubenswrapper[33141]: I0308 03:40:24.192213 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb" event={"ID":"4575461d-87d0-48f4-a495-17b6da70b2b8","Type":"ContainerDied","Data":"6cfcba44fbbdb96ea181d45751bb9256498a4e85cdb7b08e664931dda2cad4bc"}
Mar 08 03:40:24.192283 master-0 kubenswrapper[33141]: I0308 03:40:24.192276 33141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cfcba44fbbdb96ea181d45751bb9256498a4e85cdb7b08e664931dda2cad4bc"
Mar 08 03:40:24.193429 master-0 kubenswrapper[33141]: I0308 03:40:24.192232 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55cnxb"
Mar 08 03:40:24.194332 master-0 kubenswrapper[33141]: I0308 03:40:24.194264 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76" event={"ID":"4e197bb4-4815-4c6d-93cd-cfe2a28160ef","Type":"ContainerStarted","Data":"ab5356f6dd70b4ee6b79ffcafb3b9128ece1feb6753177e9e52f407941f8b41d"}
Mar 08 03:40:24.194332 master-0 kubenswrapper[33141]: I0308 03:40:24.194318 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76" event={"ID":"4e197bb4-4815-4c6d-93cd-cfe2a28160ef","Type":"ContainerStarted","Data":"1e513b7efea2d260ab457716d24aff908234df02a86782163f2978186518eb0c"}
Mar 08 03:40:24.197420 master-0 kubenswrapper[33141]: I0308 03:40:24.197276 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz"
Mar 08 03:40:24.202399 master-0 kubenswrapper[33141]: I0308 03:40:24.201069 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49hhwz" event={"ID":"cc3ebc9e-43a7-4c3f-a27d-77d317c8102f","Type":"ContainerDied","Data":"5c1e449d1c2f088dec608fbba10c577bac97e3b15f42d95ba7c853e67575f1fd"}
Mar 08 03:40:24.202399 master-0 kubenswrapper[33141]: I0308 03:40:24.201160 33141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c1e449d1c2f088dec608fbba10c577bac97e3b15f42d95ba7c853e67575f1fd"
Mar 08 03:40:24.660564 master-0 kubenswrapper[33141]: I0308 03:40:24.660494 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs"
Mar 08 03:40:24.775522 master-0 kubenswrapper[33141]: I0308 03:40:24.775422 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6psg\" (UniqueName: \"kubernetes.io/projected/5e80de77-15f9-4287-a536-3437324e5ac9-kube-api-access-j6psg\") pod \"5e80de77-15f9-4287-a536-3437324e5ac9\" (UID: \"5e80de77-15f9-4287-a536-3437324e5ac9\") "
Mar 08 03:40:24.776237 master-0 kubenswrapper[33141]: I0308 03:40:24.775623 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5e80de77-15f9-4287-a536-3437324e5ac9-bundle\") pod \"5e80de77-15f9-4287-a536-3437324e5ac9\" (UID: \"5e80de77-15f9-4287-a536-3437324e5ac9\") "
Mar 08 03:40:24.776237 master-0 kubenswrapper[33141]: I0308 03:40:24.775682 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5e80de77-15f9-4287-a536-3437324e5ac9-util\") pod \"5e80de77-15f9-4287-a536-3437324e5ac9\" (UID: \"5e80de77-15f9-4287-a536-3437324e5ac9\") "
Mar 08 03:40:24.776237 master-0 kubenswrapper[33141]: I0308 03:40:24.776107 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e80de77-15f9-4287-a536-3437324e5ac9-bundle" (OuterVolumeSpecName: "bundle") pod "5e80de77-15f9-4287-a536-3437324e5ac9" (UID: "5e80de77-15f9-4287-a536-3437324e5ac9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 03:40:24.779430 master-0 kubenswrapper[33141]: I0308 03:40:24.779386 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e80de77-15f9-4287-a536-3437324e5ac9-kube-api-access-j6psg" (OuterVolumeSpecName: "kube-api-access-j6psg") pod "5e80de77-15f9-4287-a536-3437324e5ac9" (UID: "5e80de77-15f9-4287-a536-3437324e5ac9"). InnerVolumeSpecName "kube-api-access-j6psg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:40:24.786479 master-0 kubenswrapper[33141]: I0308 03:40:24.786444 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e80de77-15f9-4287-a536-3437324e5ac9-util" (OuterVolumeSpecName: "util") pod "5e80de77-15f9-4287-a536-3437324e5ac9" (UID: "5e80de77-15f9-4287-a536-3437324e5ac9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 03:40:24.877857 master-0 kubenswrapper[33141]: I0308 03:40:24.877748 33141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6psg\" (UniqueName: \"kubernetes.io/projected/5e80de77-15f9-4287-a536-3437324e5ac9-kube-api-access-j6psg\") on node \"master-0\" DevicePath \"\""
Mar 08 03:40:24.877857 master-0 kubenswrapper[33141]: I0308 03:40:24.877783 33141 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5e80de77-15f9-4287-a536-3437324e5ac9-bundle\") on node \"master-0\" DevicePath \"\""
Mar 08 03:40:24.877857 master-0 kubenswrapper[33141]: I0308 03:40:24.877793 33141 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5e80de77-15f9-4287-a536-3437324e5ac9-util\") on node \"master-0\" DevicePath \"\""
Mar 08 03:40:25.211434 master-0 kubenswrapper[33141]: I0308 03:40:25.210793 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs" event={"ID":"5e80de77-15f9-4287-a536-3437324e5ac9","Type":"ContainerDied","Data":"15a8077323bb379d8a3c68e8ec96982ae835d4697c0aa59704889277a94b6592"}
Mar 08 03:40:25.211434 master-0 kubenswrapper[33141]: I0308 03:40:25.210868 33141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15a8077323bb379d8a3c68e8ec96982ae835d4697c0aa59704889277a94b6592"
Mar 08 03:40:25.211434 master-0 kubenswrapper[33141]: I0308 03:40:25.211006 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82qqpjs"
Mar 08 03:40:25.215241 master-0 kubenswrapper[33141]: I0308 03:40:25.215178 33141 generic.go:334] "Generic (PLEG): container finished" podID="4e197bb4-4815-4c6d-93cd-cfe2a28160ef" containerID="ab5356f6dd70b4ee6b79ffcafb3b9128ece1feb6753177e9e52f407941f8b41d" exitCode=0
Mar 08 03:40:25.215448 master-0 kubenswrapper[33141]: I0308 03:40:25.215391 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76" event={"ID":"4e197bb4-4815-4c6d-93cd-cfe2a28160ef","Type":"ContainerDied","Data":"ab5356f6dd70b4ee6b79ffcafb3b9128ece1feb6753177e9e52f407941f8b41d"}
Mar 08 03:40:27.231828 master-0 kubenswrapper[33141]: I0308 03:40:27.231775 33141 generic.go:334] "Generic (PLEG): container finished" podID="4e197bb4-4815-4c6d-93cd-cfe2a28160ef" containerID="3e0b4bf61c14aad2098b068e0585c206fb53e0afcd3e4e75e6a470835a4d4efc" exitCode=0
Mar 08 03:40:27.232329 master-0 kubenswrapper[33141]: I0308 03:40:27.232309 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76" event={"ID":"4e197bb4-4815-4c6d-93cd-cfe2a28160ef","Type":"ContainerDied","Data":"3e0b4bf61c14aad2098b068e0585c206fb53e0afcd3e4e75e6a470835a4d4efc"}
Mar 08 03:40:28.239644 master-0 kubenswrapper[33141]: I0308 03:40:28.239595 33141 generic.go:334] "Generic (PLEG): container finished" podID="4e197bb4-4815-4c6d-93cd-cfe2a28160ef" containerID="981a6172f3081cebc1b29a57eec24aae26ade3667979f2474549a04d0d10f7c9" exitCode=0
Mar 08 03:40:28.239644 master-0 kubenswrapper[33141]: I0308 03:40:28.239644 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76" event={"ID":"4e197bb4-4815-4c6d-93cd-cfe2a28160ef","Type":"ContainerDied","Data":"981a6172f3081cebc1b29a57eec24aae26ade3667979f2474549a04d0d10f7c9"}
Mar 08 03:40:29.329928 master-0 kubenswrapper[33141]: I0308 03:40:29.329833 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lgmll"]
Mar 08 03:40:29.330737 master-0 kubenswrapper[33141]: E0308 03:40:29.330207 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4575461d-87d0-48f4-a495-17b6da70b2b8" containerName="pull"
Mar 08 03:40:29.330737 master-0 kubenswrapper[33141]: I0308 03:40:29.330224 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="4575461d-87d0-48f4-a495-17b6da70b2b8" containerName="pull"
Mar 08 03:40:29.330737 master-0 kubenswrapper[33141]: E0308 03:40:29.330237 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4575461d-87d0-48f4-a495-17b6da70b2b8" containerName="util"
Mar 08 03:40:29.330737 master-0 kubenswrapper[33141]: I0308 03:40:29.330244 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="4575461d-87d0-48f4-a495-17b6da70b2b8" containerName="util"
Mar 08 03:40:29.334138 master-0 kubenswrapper[33141]: E0308 03:40:29.331987 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc3ebc9e-43a7-4c3f-a27d-77d317c8102f" containerName="extract"
Mar 08 03:40:29.334138 master-0 kubenswrapper[33141]: I0308 03:40:29.332011 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc3ebc9e-43a7-4c3f-a27d-77d317c8102f" containerName="extract"
Mar 08 03:40:29.334138 master-0 kubenswrapper[33141]: E0308 03:40:29.332044 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc3ebc9e-43a7-4c3f-a27d-77d317c8102f" containerName="util"
Mar 08 03:40:29.334138 master-0 kubenswrapper[33141]: I0308 03:40:29.332052 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc3ebc9e-43a7-4c3f-a27d-77d317c8102f" containerName="util"
Mar 08 03:40:29.334138 master-0 kubenswrapper[33141]: E0308 03:40:29.332062 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4575461d-87d0-48f4-a495-17b6da70b2b8" containerName="extract"
Mar 08 03:40:29.334138 master-0 kubenswrapper[33141]: I0308 03:40:29.332070 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="4575461d-87d0-48f4-a495-17b6da70b2b8" containerName="extract"
Mar 08 03:40:29.334138 master-0 kubenswrapper[33141]: E0308 03:40:29.332086 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e80de77-15f9-4287-a536-3437324e5ac9" containerName="extract"
Mar 08 03:40:29.334138 master-0 kubenswrapper[33141]: I0308 03:40:29.332093 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e80de77-15f9-4287-a536-3437324e5ac9" containerName="extract"
Mar 08 03:40:29.334138 master-0 kubenswrapper[33141]: E0308 03:40:29.332105 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e80de77-15f9-4287-a536-3437324e5ac9" containerName="util"
Mar 08 03:40:29.334138 master-0 kubenswrapper[33141]: I0308 03:40:29.332113 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e80de77-15f9-4287-a536-3437324e5ac9" containerName="util"
Mar 08 03:40:29.334138 master-0 kubenswrapper[33141]: E0308 03:40:29.332127 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e80de77-15f9-4287-a536-3437324e5ac9" containerName="pull"
Mar 08 03:40:29.334138 master-0 kubenswrapper[33141]: I0308 03:40:29.332135 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e80de77-15f9-4287-a536-3437324e5ac9" containerName="pull"
Mar 08 03:40:29.334138 master-0 kubenswrapper[33141]: E0308 03:40:29.332147 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc3ebc9e-43a7-4c3f-a27d-77d317c8102f" containerName="pull"
Mar 08 03:40:29.334138 master-0 kubenswrapper[33141]: I0308 03:40:29.332155 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc3ebc9e-43a7-4c3f-a27d-77d317c8102f" containerName="pull"
Mar 08 03:40:29.334138 master-0 kubenswrapper[33141]: I0308 03:40:29.332503 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e80de77-15f9-4287-a536-3437324e5ac9" containerName="extract"
Mar 08 03:40:29.334138 master-0 kubenswrapper[33141]: I0308 03:40:29.332520 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="4575461d-87d0-48f4-a495-17b6da70b2b8" containerName="extract"
Mar 08 03:40:29.334138 master-0 kubenswrapper[33141]: I0308 03:40:29.332566 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc3ebc9e-43a7-4c3f-a27d-77d317c8102f" containerName="extract"
Mar 08 03:40:29.334138 master-0 kubenswrapper[33141]: I0308 03:40:29.333483 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lgmll"
Mar 08 03:40:29.338897 master-0 kubenswrapper[33141]: I0308 03:40:29.338659 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Mar 08 03:40:29.339027 master-0 kubenswrapper[33141]: I0308 03:40:29.338704 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Mar 08 03:40:29.458991 master-0 kubenswrapper[33141]: I0308 03:40:29.458875 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lgmll"]
Mar 08 03:40:29.462991 master-0 kubenswrapper[33141]: I0308 03:40:29.462928 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqp4j\" (UniqueName: \"kubernetes.io/projected/3a477965-934f-49c9-b6c0-dbe7cabd1179-kube-api-access-xqp4j\") pod \"cert-manager-operator-controller-manager-66c8bdd694-lgmll\" (UID: \"3a477965-934f-49c9-b6c0-dbe7cabd1179\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lgmll"
Mar 08 03:40:29.462991 master-0 kubenswrapper[33141]: I0308 03:40:29.462991 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3a477965-934f-49c9-b6c0-dbe7cabd1179-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-lgmll\" (UID: \"3a477965-934f-49c9-b6c0-dbe7cabd1179\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lgmll"
Mar 08 03:40:29.564972 master-0 kubenswrapper[33141]: I0308 03:40:29.564713 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqp4j\" (UniqueName: \"kubernetes.io/projected/3a477965-934f-49c9-b6c0-dbe7cabd1179-kube-api-access-xqp4j\") pod \"cert-manager-operator-controller-manager-66c8bdd694-lgmll\" (UID: \"3a477965-934f-49c9-b6c0-dbe7cabd1179\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lgmll"
Mar 08 03:40:29.564972 master-0 kubenswrapper[33141]: I0308 03:40:29.564784 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3a477965-934f-49c9-b6c0-dbe7cabd1179-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-lgmll\" (UID: \"3a477965-934f-49c9-b6c0-dbe7cabd1179\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lgmll"
Mar 08 03:40:29.565374 master-0 kubenswrapper[33141]: I0308 03:40:29.565343 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3a477965-934f-49c9-b6c0-dbe7cabd1179-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-lgmll\" (UID: \"3a477965-934f-49c9-b6c0-dbe7cabd1179\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lgmll"
Mar 08 03:40:29.584339 master-0 kubenswrapper[33141]: I0308 03:40:29.584276 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqp4j\" (UniqueName: \"kubernetes.io/projected/3a477965-934f-49c9-b6c0-dbe7cabd1179-kube-api-access-xqp4j\") pod \"cert-manager-operator-controller-manager-66c8bdd694-lgmll\" (UID: \"3a477965-934f-49c9-b6c0-dbe7cabd1179\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lgmll"
Mar 08 03:40:29.663989 master-0 kubenswrapper[33141]: I0308 03:40:29.662206 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lgmll"
Mar 08 03:40:29.728398 master-0 kubenswrapper[33141]: I0308 03:40:29.727595 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76"
Mar 08 03:40:29.878064 master-0 kubenswrapper[33141]: I0308 03:40:29.877682 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e197bb4-4815-4c6d-93cd-cfe2a28160ef-bundle\") pod \"4e197bb4-4815-4c6d-93cd-cfe2a28160ef\" (UID: \"4e197bb4-4815-4c6d-93cd-cfe2a28160ef\") "
Mar 08 03:40:29.878064 master-0 kubenswrapper[33141]: I0308 03:40:29.877855 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e197bb4-4815-4c6d-93cd-cfe2a28160ef-util\") pod \"4e197bb4-4815-4c6d-93cd-cfe2a28160ef\" (UID: \"4e197bb4-4815-4c6d-93cd-cfe2a28160ef\") "
Mar 08 03:40:29.878064 master-0 kubenswrapper[33141]: I0308 03:40:29.878011 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvzlb\" (UniqueName: \"kubernetes.io/projected/4e197bb4-4815-4c6d-93cd-cfe2a28160ef-kube-api-access-rvzlb\") pod \"4e197bb4-4815-4c6d-93cd-cfe2a28160ef\" (UID: \"4e197bb4-4815-4c6d-93cd-cfe2a28160ef\") "
Mar 08 03:40:29.881381 master-0 kubenswrapper[33141]: I0308 03:40:29.881334 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e197bb4-4815-4c6d-93cd-cfe2a28160ef-bundle" (OuterVolumeSpecName: "bundle") pod "4e197bb4-4815-4c6d-93cd-cfe2a28160ef" (UID: "4e197bb4-4815-4c6d-93cd-cfe2a28160ef"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 03:40:29.889825 master-0 kubenswrapper[33141]: I0308 03:40:29.889625 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e197bb4-4815-4c6d-93cd-cfe2a28160ef-util" (OuterVolumeSpecName: "util") pod "4e197bb4-4815-4c6d-93cd-cfe2a28160ef" (UID: "4e197bb4-4815-4c6d-93cd-cfe2a28160ef"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 03:40:29.893244 master-0 kubenswrapper[33141]: I0308 03:40:29.893175 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e197bb4-4815-4c6d-93cd-cfe2a28160ef-kube-api-access-rvzlb" (OuterVolumeSpecName: "kube-api-access-rvzlb") pod "4e197bb4-4815-4c6d-93cd-cfe2a28160ef" (UID: "4e197bb4-4815-4c6d-93cd-cfe2a28160ef"). InnerVolumeSpecName "kube-api-access-rvzlb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:40:29.979527 master-0 kubenswrapper[33141]: I0308 03:40:29.979479 33141 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e197bb4-4815-4c6d-93cd-cfe2a28160ef-util\") on node \"master-0\" DevicePath \"\""
Mar 08 03:40:29.979527 master-0 kubenswrapper[33141]: I0308 03:40:29.979522 33141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvzlb\" (UniqueName: \"kubernetes.io/projected/4e197bb4-4815-4c6d-93cd-cfe2a28160ef-kube-api-access-rvzlb\") on node \"master-0\" DevicePath \"\""
Mar 08 03:40:29.979527 master-0 kubenswrapper[33141]: I0308 03:40:29.979535 33141 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e197bb4-4815-4c6d-93cd-cfe2a28160ef-bundle\") on node \"master-0\" DevicePath \"\""
Mar 08 03:40:30.135563 master-0 kubenswrapper[33141]: I0308 03:40:30.135505 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lgmll"]
Mar 08 03:40:30.141041 master-0 kubenswrapper[33141]: W0308 03:40:30.140969 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a477965_934f_49c9_b6c0_dbe7cabd1179.slice/crio-7929b5f9200000bf6e142fd7af676d5f38ab22993a482d467a0effa19aec3b76 WatchSource:0}: Error finding container 7929b5f9200000bf6e142fd7af676d5f38ab22993a482d467a0effa19aec3b76: Status 404 returned error can't find the container with id 7929b5f9200000bf6e142fd7af676d5f38ab22993a482d467a0effa19aec3b76
Mar 08 03:40:30.255721 master-0 kubenswrapper[33141]: I0308 03:40:30.255491 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lgmll" event={"ID":"3a477965-934f-49c9-b6c0-dbe7cabd1179","Type":"ContainerStarted","Data":"7929b5f9200000bf6e142fd7af676d5f38ab22993a482d467a0effa19aec3b76"}
Mar 08 03:40:30.257860 master-0 kubenswrapper[33141]: I0308 03:40:30.257839 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76" event={"ID":"4e197bb4-4815-4c6d-93cd-cfe2a28160ef","Type":"ContainerDied","Data":"1e513b7efea2d260ab457716d24aff908234df02a86782163f2978186518eb0c"}
Mar 08 03:40:30.257945 master-0 kubenswrapper[33141]: I0308 03:40:30.257861 33141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e513b7efea2d260ab457716d24aff908234df02a86782163f2978186518eb0c"
Mar 08 03:40:30.258016 master-0 kubenswrapper[33141]: I0308 03:40:30.257950 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08j6s76"
Mar 08 03:40:34.291966 master-0 kubenswrapper[33141]: I0308 03:40:34.291856 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lgmll" event={"ID":"3a477965-934f-49c9-b6c0-dbe7cabd1179","Type":"ContainerStarted","Data":"985ea13ba7bc9e0211b10a6266637634f54865a32bfa28f935538a57cb0b357f"}
Mar 08 03:40:34.344858 master-0 kubenswrapper[33141]: I0308 03:40:34.344736 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-lgmll" podStartSLOduration=2.098144821 podStartE2EDuration="5.344698943s" podCreationTimestamp="2026-03-08 03:40:29 +0000 UTC" firstStartedPulling="2026-03-08 03:40:30.144243594 +0000 UTC m=+544.014136787" lastFinishedPulling="2026-03-08 03:40:33.390797716 +0000 UTC m=+547.260690909" observedRunningTime="2026-03-08 03:40:34.337068244 +0000 UTC m=+548.206961447"
watchObservedRunningTime="2026-03-08 03:40:34.344698943 +0000 UTC m=+548.214592176" Mar 08 03:40:37.592946 master-0 kubenswrapper[33141]: I0308 03:40:37.582998 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-b8tpj"] Mar 08 03:40:37.592946 master-0 kubenswrapper[33141]: E0308 03:40:37.584113 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e197bb4-4815-4c6d-93cd-cfe2a28160ef" containerName="util" Mar 08 03:40:37.592946 master-0 kubenswrapper[33141]: I0308 03:40:37.584136 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e197bb4-4815-4c6d-93cd-cfe2a28160ef" containerName="util" Mar 08 03:40:37.592946 master-0 kubenswrapper[33141]: E0308 03:40:37.584159 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e197bb4-4815-4c6d-93cd-cfe2a28160ef" containerName="pull" Mar 08 03:40:37.592946 master-0 kubenswrapper[33141]: I0308 03:40:37.584168 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e197bb4-4815-4c6d-93cd-cfe2a28160ef" containerName="pull" Mar 08 03:40:37.592946 master-0 kubenswrapper[33141]: E0308 03:40:37.584208 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e197bb4-4815-4c6d-93cd-cfe2a28160ef" containerName="extract" Mar 08 03:40:37.592946 master-0 kubenswrapper[33141]: I0308 03:40:37.584215 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e197bb4-4815-4c6d-93cd-cfe2a28160ef" containerName="extract" Mar 08 03:40:37.592946 master-0 kubenswrapper[33141]: I0308 03:40:37.584377 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e197bb4-4815-4c6d-93cd-cfe2a28160ef" containerName="extract" Mar 08 03:40:37.592946 master-0 kubenswrapper[33141]: I0308 03:40:37.585331 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-b8tpj" Mar 08 03:40:37.592946 master-0 kubenswrapper[33141]: I0308 03:40:37.588045 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 08 03:40:37.592946 master-0 kubenswrapper[33141]: I0308 03:40:37.588248 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 08 03:40:37.603004 master-0 kubenswrapper[33141]: I0308 03:40:37.602642 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-b8tpj"] Mar 08 03:40:37.624297 master-0 kubenswrapper[33141]: I0308 03:40:37.624255 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b76c541b-0854-4509-a480-63908cd11269-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-b8tpj\" (UID: \"b76c541b-0854-4509-a480-63908cd11269\") " pod="cert-manager/cert-manager-webhook-6888856db4-b8tpj" Mar 08 03:40:37.624585 master-0 kubenswrapper[33141]: I0308 03:40:37.624550 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dgq7\" (UniqueName: \"kubernetes.io/projected/b76c541b-0854-4509-a480-63908cd11269-kube-api-access-9dgq7\") pod \"cert-manager-webhook-6888856db4-b8tpj\" (UID: \"b76c541b-0854-4509-a480-63908cd11269\") " pod="cert-manager/cert-manager-webhook-6888856db4-b8tpj" Mar 08 03:40:37.731945 master-0 kubenswrapper[33141]: I0308 03:40:37.728287 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b76c541b-0854-4509-a480-63908cd11269-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-b8tpj\" (UID: \"b76c541b-0854-4509-a480-63908cd11269\") " pod="cert-manager/cert-manager-webhook-6888856db4-b8tpj" Mar 08 03:40:37.731945 master-0 
kubenswrapper[33141]: I0308 03:40:37.728448 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dgq7\" (UniqueName: \"kubernetes.io/projected/b76c541b-0854-4509-a480-63908cd11269-kube-api-access-9dgq7\") pod \"cert-manager-webhook-6888856db4-b8tpj\" (UID: \"b76c541b-0854-4509-a480-63908cd11269\") " pod="cert-manager/cert-manager-webhook-6888856db4-b8tpj" Mar 08 03:40:37.748938 master-0 kubenswrapper[33141]: I0308 03:40:37.746605 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b76c541b-0854-4509-a480-63908cd11269-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-b8tpj\" (UID: \"b76c541b-0854-4509-a480-63908cd11269\") " pod="cert-manager/cert-manager-webhook-6888856db4-b8tpj" Mar 08 03:40:37.761944 master-0 kubenswrapper[33141]: I0308 03:40:37.760245 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dgq7\" (UniqueName: \"kubernetes.io/projected/b76c541b-0854-4509-a480-63908cd11269-kube-api-access-9dgq7\") pod \"cert-manager-webhook-6888856db4-b8tpj\" (UID: \"b76c541b-0854-4509-a480-63908cd11269\") " pod="cert-manager/cert-manager-webhook-6888856db4-b8tpj" Mar 08 03:40:37.923999 master-0 kubenswrapper[33141]: I0308 03:40:37.923217 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-b8tpj" Mar 08 03:40:38.421053 master-0 kubenswrapper[33141]: W0308 03:40:38.420804 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb76c541b_0854_4509_a480_63908cd11269.slice/crio-282f2abe2880f857413faa46938a6d77a8c8801c6873959748334f1c9f51e5f3 WatchSource:0}: Error finding container 282f2abe2880f857413faa46938a6d77a8c8801c6873959748334f1c9f51e5f3: Status 404 returned error can't find the container with id 282f2abe2880f857413faa46938a6d77a8c8801c6873959748334f1c9f51e5f3 Mar 08 03:40:38.421180 master-0 kubenswrapper[33141]: I0308 03:40:38.421069 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-b8tpj"] Mar 08 03:40:39.330207 master-0 kubenswrapper[33141]: I0308 03:40:39.330145 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-b8tpj" event={"ID":"b76c541b-0854-4509-a480-63908cd11269","Type":"ContainerStarted","Data":"282f2abe2880f857413faa46938a6d77a8c8801c6873959748334f1c9f51e5f3"} Mar 08 03:40:39.412207 master-0 kubenswrapper[33141]: I0308 03:40:39.412129 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-zs2k7"] Mar 08 03:40:39.413278 master-0 kubenswrapper[33141]: I0308 03:40:39.413246 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-zs2k7" Mar 08 03:40:39.429617 master-0 kubenswrapper[33141]: I0308 03:40:39.429559 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-zs2k7"] Mar 08 03:40:39.560382 master-0 kubenswrapper[33141]: I0308 03:40:39.560308 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ccd87fae-c211-42ca-96ff-2631339fcfd3-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-zs2k7\" (UID: \"ccd87fae-c211-42ca-96ff-2631339fcfd3\") " pod="cert-manager/cert-manager-cainjector-5545bd876-zs2k7" Mar 08 03:40:39.560636 master-0 kubenswrapper[33141]: I0308 03:40:39.560494 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfv89\" (UniqueName: \"kubernetes.io/projected/ccd87fae-c211-42ca-96ff-2631339fcfd3-kube-api-access-hfv89\") pod \"cert-manager-cainjector-5545bd876-zs2k7\" (UID: \"ccd87fae-c211-42ca-96ff-2631339fcfd3\") " pod="cert-manager/cert-manager-cainjector-5545bd876-zs2k7" Mar 08 03:40:39.670234 master-0 kubenswrapper[33141]: I0308 03:40:39.669658 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfv89\" (UniqueName: \"kubernetes.io/projected/ccd87fae-c211-42ca-96ff-2631339fcfd3-kube-api-access-hfv89\") pod \"cert-manager-cainjector-5545bd876-zs2k7\" (UID: \"ccd87fae-c211-42ca-96ff-2631339fcfd3\") " pod="cert-manager/cert-manager-cainjector-5545bd876-zs2k7" Mar 08 03:40:39.670234 master-0 kubenswrapper[33141]: I0308 03:40:39.670003 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ccd87fae-c211-42ca-96ff-2631339fcfd3-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-zs2k7\" (UID: \"ccd87fae-c211-42ca-96ff-2631339fcfd3\") " 
pod="cert-manager/cert-manager-cainjector-5545bd876-zs2k7" Mar 08 03:40:39.697831 master-0 kubenswrapper[33141]: I0308 03:40:39.697758 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ccd87fae-c211-42ca-96ff-2631339fcfd3-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-zs2k7\" (UID: \"ccd87fae-c211-42ca-96ff-2631339fcfd3\") " pod="cert-manager/cert-manager-cainjector-5545bd876-zs2k7" Mar 08 03:40:39.711194 master-0 kubenswrapper[33141]: I0308 03:40:39.711138 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfv89\" (UniqueName: \"kubernetes.io/projected/ccd87fae-c211-42ca-96ff-2631339fcfd3-kube-api-access-hfv89\") pod \"cert-manager-cainjector-5545bd876-zs2k7\" (UID: \"ccd87fae-c211-42ca-96ff-2631339fcfd3\") " pod="cert-manager/cert-manager-cainjector-5545bd876-zs2k7" Mar 08 03:40:39.741271 master-0 kubenswrapper[33141]: I0308 03:40:39.741183 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-zs2k7" Mar 08 03:40:40.458346 master-0 kubenswrapper[33141]: I0308 03:40:40.458252 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-zs2k7"] Mar 08 03:40:41.297040 master-0 kubenswrapper[33141]: I0308 03:40:41.296973 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-4rskc"] Mar 08 03:40:41.298003 master-0 kubenswrapper[33141]: I0308 03:40:41.297921 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-4rskc" Mar 08 03:40:41.299772 master-0 kubenswrapper[33141]: I0308 03:40:41.299665 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 08 03:40:41.301280 master-0 kubenswrapper[33141]: I0308 03:40:41.301127 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 08 03:40:41.312129 master-0 kubenswrapper[33141]: I0308 03:40:41.309782 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-4rskc"] Mar 08 03:40:41.354920 master-0 kubenswrapper[33141]: I0308 03:40:41.354856 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-zs2k7" event={"ID":"ccd87fae-c211-42ca-96ff-2631339fcfd3","Type":"ContainerStarted","Data":"79bb872f5ad7323a4809ba4c67434bb0614acd2b0cac92f8b3dde15557eff55a"} Mar 08 03:40:41.401645 master-0 kubenswrapper[33141]: I0308 03:40:41.400881 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rvpw\" (UniqueName: \"kubernetes.io/projected/fd3b4005-3ca5-4d51-b08e-0a71545c2990-kube-api-access-4rvpw\") pod \"nmstate-operator-75c5dccd6c-4rskc\" (UID: \"fd3b4005-3ca5-4d51-b08e-0a71545c2990\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-4rskc" Mar 08 03:40:41.504400 master-0 kubenswrapper[33141]: I0308 03:40:41.504249 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rvpw\" (UniqueName: \"kubernetes.io/projected/fd3b4005-3ca5-4d51-b08e-0a71545c2990-kube-api-access-4rvpw\") pod \"nmstate-operator-75c5dccd6c-4rskc\" (UID: \"fd3b4005-3ca5-4d51-b08e-0a71545c2990\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-4rskc" Mar 08 03:40:41.521818 master-0 kubenswrapper[33141]: I0308 03:40:41.521491 33141 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4rvpw\" (UniqueName: \"kubernetes.io/projected/fd3b4005-3ca5-4d51-b08e-0a71545c2990-kube-api-access-4rvpw\") pod \"nmstate-operator-75c5dccd6c-4rskc\" (UID: \"fd3b4005-3ca5-4d51-b08e-0a71545c2990\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-4rskc" Mar 08 03:40:41.619587 master-0 kubenswrapper[33141]: I0308 03:40:41.619442 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-4rskc" Mar 08 03:40:42.051593 master-0 kubenswrapper[33141]: I0308 03:40:42.051526 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-4rskc"] Mar 08 03:40:43.620011 master-0 kubenswrapper[33141]: W0308 03:40:43.619963 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd3b4005_3ca5_4d51_b08e_0a71545c2990.slice/crio-ea34d783db8363b49ed34119311101e5c0b01511acf5e20f8a953a32b288db35 WatchSource:0}: Error finding container ea34d783db8363b49ed34119311101e5c0b01511acf5e20f8a953a32b288db35: Status 404 returned error can't find the container with id ea34d783db8363b49ed34119311101e5c0b01511acf5e20f8a953a32b288db35 Mar 08 03:40:44.392733 master-0 kubenswrapper[33141]: I0308 03:40:44.392520 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-b8tpj" event={"ID":"b76c541b-0854-4509-a480-63908cd11269","Type":"ContainerStarted","Data":"6e5a08b25858c0a452aa20ff2812a11ed5689f5327a9a60d17eaccb887193533"} Mar 08 03:40:44.398236 master-0 kubenswrapper[33141]: I0308 03:40:44.393109 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-b8tpj" Mar 08 03:40:44.398236 master-0 kubenswrapper[33141]: I0308 03:40:44.396462 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-operator-75c5dccd6c-4rskc" event={"ID":"fd3b4005-3ca5-4d51-b08e-0a71545c2990","Type":"ContainerStarted","Data":"ea34d783db8363b49ed34119311101e5c0b01511acf5e20f8a953a32b288db35"} Mar 08 03:40:44.398446 master-0 kubenswrapper[33141]: I0308 03:40:44.398395 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-zs2k7" event={"ID":"ccd87fae-c211-42ca-96ff-2631339fcfd3","Type":"ContainerStarted","Data":"298ed5f7f49c37bea9c826ca62d82b51341e95bee76f0b79e24bf9fe96342377"} Mar 08 03:40:44.414010 master-0 kubenswrapper[33141]: I0308 03:40:44.413929 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-b8tpj" podStartSLOduration=2.141668442 podStartE2EDuration="7.413897457s" podCreationTimestamp="2026-03-08 03:40:37 +0000 UTC" firstStartedPulling="2026-03-08 03:40:38.422523378 +0000 UTC m=+552.292416571" lastFinishedPulling="2026-03-08 03:40:43.694752393 +0000 UTC m=+557.564645586" observedRunningTime="2026-03-08 03:40:44.41284657 +0000 UTC m=+558.282739773" watchObservedRunningTime="2026-03-08 03:40:44.413897457 +0000 UTC m=+558.283790660" Mar 08 03:40:44.442817 master-0 kubenswrapper[33141]: I0308 03:40:44.442695 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-zs2k7" podStartSLOduration=2.187335986 podStartE2EDuration="5.442668196s" podCreationTimestamp="2026-03-08 03:40:39 +0000 UTC" firstStartedPulling="2026-03-08 03:40:40.45810136 +0000 UTC m=+554.327994563" lastFinishedPulling="2026-03-08 03:40:43.71343358 +0000 UTC m=+557.583326773" observedRunningTime="2026-03-08 03:40:44.436991178 +0000 UTC m=+558.306884452" watchObservedRunningTime="2026-03-08 03:40:44.442668196 +0000 UTC m=+558.312561409" Mar 08 03:40:47.421103 master-0 kubenswrapper[33141]: I0308 03:40:47.420995 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-operator-75c5dccd6c-4rskc" event={"ID":"fd3b4005-3ca5-4d51-b08e-0a71545c2990","Type":"ContainerStarted","Data":"09f45239a540938a38ed729e792a505ae24cc3a4073a049d3d0b4430e5325f0b"} Mar 08 03:40:47.449816 master-0 kubenswrapper[33141]: I0308 03:40:47.449696 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-4rskc" podStartSLOduration=3.378477786 podStartE2EDuration="6.449673701s" podCreationTimestamp="2026-03-08 03:40:41 +0000 UTC" firstStartedPulling="2026-03-08 03:40:43.676424956 +0000 UTC m=+557.546318149" lastFinishedPulling="2026-03-08 03:40:46.747620861 +0000 UTC m=+560.617514064" observedRunningTime="2026-03-08 03:40:47.449012674 +0000 UTC m=+561.318905957" watchObservedRunningTime="2026-03-08 03:40:47.449673701 +0000 UTC m=+561.319566924" Mar 08 03:40:50.192307 master-0 kubenswrapper[33141]: I0308 03:40:50.192237 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-76b695cc4b-p6jt4"] Mar 08 03:40:50.193205 master-0 kubenswrapper[33141]: I0308 03:40:50.193178 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-76b695cc4b-p6jt4" Mar 08 03:40:50.207239 master-0 kubenswrapper[33141]: I0308 03:40:50.207190 33141 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 08 03:40:50.207456 master-0 kubenswrapper[33141]: I0308 03:40:50.207266 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 08 03:40:50.208091 master-0 kubenswrapper[33141]: I0308 03:40:50.208066 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 08 03:40:50.208172 master-0 kubenswrapper[33141]: I0308 03:40:50.208083 33141 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 08 03:40:50.218507 master-0 kubenswrapper[33141]: I0308 03:40:50.218446 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-76b695cc4b-p6jt4"] Mar 08 03:40:50.301630 master-0 kubenswrapper[33141]: I0308 03:40:50.301567 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4f66\" (UniqueName: \"kubernetes.io/projected/510c4395-781d-48ea-b253-247bc7bcc3f4-kube-api-access-n4f66\") pod \"metallb-operator-controller-manager-76b695cc4b-p6jt4\" (UID: \"510c4395-781d-48ea-b253-247bc7bcc3f4\") " pod="metallb-system/metallb-operator-controller-manager-76b695cc4b-p6jt4" Mar 08 03:40:50.301859 master-0 kubenswrapper[33141]: I0308 03:40:50.301647 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/510c4395-781d-48ea-b253-247bc7bcc3f4-apiservice-cert\") pod \"metallb-operator-controller-manager-76b695cc4b-p6jt4\" (UID: \"510c4395-781d-48ea-b253-247bc7bcc3f4\") " 
pod="metallb-system/metallb-operator-controller-manager-76b695cc4b-p6jt4" Mar 08 03:40:50.301859 master-0 kubenswrapper[33141]: I0308 03:40:50.301709 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/510c4395-781d-48ea-b253-247bc7bcc3f4-webhook-cert\") pod \"metallb-operator-controller-manager-76b695cc4b-p6jt4\" (UID: \"510c4395-781d-48ea-b253-247bc7bcc3f4\") " pod="metallb-system/metallb-operator-controller-manager-76b695cc4b-p6jt4" Mar 08 03:40:50.402811 master-0 kubenswrapper[33141]: I0308 03:40:50.402749 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/510c4395-781d-48ea-b253-247bc7bcc3f4-apiservice-cert\") pod \"metallb-operator-controller-manager-76b695cc4b-p6jt4\" (UID: \"510c4395-781d-48ea-b253-247bc7bcc3f4\") " pod="metallb-system/metallb-operator-controller-manager-76b695cc4b-p6jt4" Mar 08 03:40:50.403080 master-0 kubenswrapper[33141]: I0308 03:40:50.402879 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/510c4395-781d-48ea-b253-247bc7bcc3f4-webhook-cert\") pod \"metallb-operator-controller-manager-76b695cc4b-p6jt4\" (UID: \"510c4395-781d-48ea-b253-247bc7bcc3f4\") " pod="metallb-system/metallb-operator-controller-manager-76b695cc4b-p6jt4" Mar 08 03:40:50.403080 master-0 kubenswrapper[33141]: I0308 03:40:50.402957 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4f66\" (UniqueName: \"kubernetes.io/projected/510c4395-781d-48ea-b253-247bc7bcc3f4-kube-api-access-n4f66\") pod \"metallb-operator-controller-manager-76b695cc4b-p6jt4\" (UID: \"510c4395-781d-48ea-b253-247bc7bcc3f4\") " pod="metallb-system/metallb-operator-controller-manager-76b695cc4b-p6jt4" Mar 08 03:40:50.406386 master-0 kubenswrapper[33141]: I0308 03:40:50.406349 33141 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/510c4395-781d-48ea-b253-247bc7bcc3f4-webhook-cert\") pod \"metallb-operator-controller-manager-76b695cc4b-p6jt4\" (UID: \"510c4395-781d-48ea-b253-247bc7bcc3f4\") " pod="metallb-system/metallb-operator-controller-manager-76b695cc4b-p6jt4" Mar 08 03:40:50.406811 master-0 kubenswrapper[33141]: I0308 03:40:50.406787 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/510c4395-781d-48ea-b253-247bc7bcc3f4-apiservice-cert\") pod \"metallb-operator-controller-manager-76b695cc4b-p6jt4\" (UID: \"510c4395-781d-48ea-b253-247bc7bcc3f4\") " pod="metallb-system/metallb-operator-controller-manager-76b695cc4b-p6jt4" Mar 08 03:40:50.424809 master-0 kubenswrapper[33141]: I0308 03:40:50.424751 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4f66\" (UniqueName: \"kubernetes.io/projected/510c4395-781d-48ea-b253-247bc7bcc3f4-kube-api-access-n4f66\") pod \"metallb-operator-controller-manager-76b695cc4b-p6jt4\" (UID: \"510c4395-781d-48ea-b253-247bc7bcc3f4\") " pod="metallb-system/metallb-operator-controller-manager-76b695cc4b-p6jt4" Mar 08 03:40:50.507101 master-0 kubenswrapper[33141]: I0308 03:40:50.507008 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-76b695cc4b-p6jt4" Mar 08 03:40:50.915349 master-0 kubenswrapper[33141]: I0308 03:40:50.913623 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-58cf648889-6c6hf"] Mar 08 03:40:50.915349 master-0 kubenswrapper[33141]: I0308 03:40:50.914946 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-58cf648889-6c6hf" Mar 08 03:40:50.919925 master-0 kubenswrapper[33141]: I0308 03:40:50.917583 33141 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 08 03:40:50.919925 master-0 kubenswrapper[33141]: I0308 03:40:50.917746 33141 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 08 03:40:50.934433 master-0 kubenswrapper[33141]: I0308 03:40:50.930366 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-58cf648889-6c6hf"] Mar 08 03:40:51.020867 master-0 kubenswrapper[33141]: I0308 03:40:51.020803 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clpvz\" (UniqueName: \"kubernetes.io/projected/60613c6d-80bd-4b7c-9560-69b983dd71df-kube-api-access-clpvz\") pod \"metallb-operator-webhook-server-58cf648889-6c6hf\" (UID: \"60613c6d-80bd-4b7c-9560-69b983dd71df\") " pod="metallb-system/metallb-operator-webhook-server-58cf648889-6c6hf" Mar 08 03:40:51.021066 master-0 kubenswrapper[33141]: I0308 03:40:51.020933 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60613c6d-80bd-4b7c-9560-69b983dd71df-webhook-cert\") pod \"metallb-operator-webhook-server-58cf648889-6c6hf\" (UID: \"60613c6d-80bd-4b7c-9560-69b983dd71df\") " pod="metallb-system/metallb-operator-webhook-server-58cf648889-6c6hf" Mar 08 03:40:51.021066 master-0 kubenswrapper[33141]: I0308 03:40:51.020983 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60613c6d-80bd-4b7c-9560-69b983dd71df-apiservice-cert\") pod \"metallb-operator-webhook-server-58cf648889-6c6hf\" (UID: 
\"60613c6d-80bd-4b7c-9560-69b983dd71df\") " pod="metallb-system/metallb-operator-webhook-server-58cf648889-6c6hf" Mar 08 03:40:51.123393 master-0 kubenswrapper[33141]: I0308 03:40:51.122802 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60613c6d-80bd-4b7c-9560-69b983dd71df-webhook-cert\") pod \"metallb-operator-webhook-server-58cf648889-6c6hf\" (UID: \"60613c6d-80bd-4b7c-9560-69b983dd71df\") " pod="metallb-system/metallb-operator-webhook-server-58cf648889-6c6hf" Mar 08 03:40:51.123393 master-0 kubenswrapper[33141]: I0308 03:40:51.122872 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60613c6d-80bd-4b7c-9560-69b983dd71df-apiservice-cert\") pod \"metallb-operator-webhook-server-58cf648889-6c6hf\" (UID: \"60613c6d-80bd-4b7c-9560-69b983dd71df\") " pod="metallb-system/metallb-operator-webhook-server-58cf648889-6c6hf" Mar 08 03:40:51.123393 master-0 kubenswrapper[33141]: I0308 03:40:51.122919 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clpvz\" (UniqueName: \"kubernetes.io/projected/60613c6d-80bd-4b7c-9560-69b983dd71df-kube-api-access-clpvz\") pod \"metallb-operator-webhook-server-58cf648889-6c6hf\" (UID: \"60613c6d-80bd-4b7c-9560-69b983dd71df\") " pod="metallb-system/metallb-operator-webhook-server-58cf648889-6c6hf" Mar 08 03:40:51.126264 master-0 kubenswrapper[33141]: I0308 03:40:51.126237 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/60613c6d-80bd-4b7c-9560-69b983dd71df-apiservice-cert\") pod \"metallb-operator-webhook-server-58cf648889-6c6hf\" (UID: \"60613c6d-80bd-4b7c-9560-69b983dd71df\") " pod="metallb-system/metallb-operator-webhook-server-58cf648889-6c6hf" Mar 08 03:40:51.126385 master-0 kubenswrapper[33141]: I0308 03:40:51.126284 33141 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/60613c6d-80bd-4b7c-9560-69b983dd71df-webhook-cert\") pod \"metallb-operator-webhook-server-58cf648889-6c6hf\" (UID: \"60613c6d-80bd-4b7c-9560-69b983dd71df\") " pod="metallb-system/metallb-operator-webhook-server-58cf648889-6c6hf" Mar 08 03:40:51.145293 master-0 kubenswrapper[33141]: I0308 03:40:51.143787 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clpvz\" (UniqueName: \"kubernetes.io/projected/60613c6d-80bd-4b7c-9560-69b983dd71df-kube-api-access-clpvz\") pod \"metallb-operator-webhook-server-58cf648889-6c6hf\" (UID: \"60613c6d-80bd-4b7c-9560-69b983dd71df\") " pod="metallb-system/metallb-operator-webhook-server-58cf648889-6c6hf" Mar 08 03:40:51.175680 master-0 kubenswrapper[33141]: I0308 03:40:51.174336 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-76b695cc4b-p6jt4"] Mar 08 03:40:51.289281 master-0 kubenswrapper[33141]: I0308 03:40:51.289226 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-58cf648889-6c6hf" Mar 08 03:40:51.518261 master-0 kubenswrapper[33141]: I0308 03:40:51.515506 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-76b695cc4b-p6jt4" event={"ID":"510c4395-781d-48ea-b253-247bc7bcc3f4","Type":"ContainerStarted","Data":"e7b1b5921363f41348c2d9600236260e35fed78cc2bfa3e10efc6f34fcdb4c65"} Mar 08 03:40:51.880723 master-0 kubenswrapper[33141]: I0308 03:40:51.880654 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-58cf648889-6c6hf"] Mar 08 03:40:52.537164 master-0 kubenswrapper[33141]: I0308 03:40:52.537093 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-58cf648889-6c6hf" event={"ID":"60613c6d-80bd-4b7c-9560-69b983dd71df","Type":"ContainerStarted","Data":"97c72cfb22486ff0e7cb313865421343303eedbe8e4eee6dba43cf93cc5aa60d"} Mar 08 03:40:52.930507 master-0 kubenswrapper[33141]: I0308 03:40:52.930389 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-b8tpj" Mar 08 03:40:55.992009 master-0 kubenswrapper[33141]: I0308 03:40:55.990623 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-7wr8x"] Mar 08 03:40:55.992009 master-0 kubenswrapper[33141]: I0308 03:40:55.991562 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-7wr8x" Mar 08 03:40:56.005582 master-0 kubenswrapper[33141]: I0308 03:40:56.005507 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-7wr8x"] Mar 08 03:40:56.058965 master-0 kubenswrapper[33141]: I0308 03:40:56.058890 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb8zj\" (UniqueName: \"kubernetes.io/projected/420b9a36-158d-4468-924e-074e0e2c4f5c-kube-api-access-gb8zj\") pod \"cert-manager-545d4d4674-7wr8x\" (UID: \"420b9a36-158d-4468-924e-074e0e2c4f5c\") " pod="cert-manager/cert-manager-545d4d4674-7wr8x" Mar 08 03:40:56.059232 master-0 kubenswrapper[33141]: I0308 03:40:56.059048 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/420b9a36-158d-4468-924e-074e0e2c4f5c-bound-sa-token\") pod \"cert-manager-545d4d4674-7wr8x\" (UID: \"420b9a36-158d-4468-924e-074e0e2c4f5c\") " pod="cert-manager/cert-manager-545d4d4674-7wr8x" Mar 08 03:40:56.161404 master-0 kubenswrapper[33141]: I0308 03:40:56.161136 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/420b9a36-158d-4468-924e-074e0e2c4f5c-bound-sa-token\") pod \"cert-manager-545d4d4674-7wr8x\" (UID: \"420b9a36-158d-4468-924e-074e0e2c4f5c\") " pod="cert-manager/cert-manager-545d4d4674-7wr8x" Mar 08 03:40:56.161404 master-0 kubenswrapper[33141]: I0308 03:40:56.161227 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb8zj\" (UniqueName: \"kubernetes.io/projected/420b9a36-158d-4468-924e-074e0e2c4f5c-kube-api-access-gb8zj\") pod \"cert-manager-545d4d4674-7wr8x\" (UID: \"420b9a36-158d-4468-924e-074e0e2c4f5c\") " pod="cert-manager/cert-manager-545d4d4674-7wr8x" Mar 08 03:40:56.177847 master-0 
kubenswrapper[33141]: I0308 03:40:56.177039 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/420b9a36-158d-4468-924e-074e0e2c4f5c-bound-sa-token\") pod \"cert-manager-545d4d4674-7wr8x\" (UID: \"420b9a36-158d-4468-924e-074e0e2c4f5c\") " pod="cert-manager/cert-manager-545d4d4674-7wr8x" Mar 08 03:40:56.179369 master-0 kubenswrapper[33141]: I0308 03:40:56.179299 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb8zj\" (UniqueName: \"kubernetes.io/projected/420b9a36-158d-4468-924e-074e0e2c4f5c-kube-api-access-gb8zj\") pod \"cert-manager-545d4d4674-7wr8x\" (UID: \"420b9a36-158d-4468-924e-074e0e2c4f5c\") " pod="cert-manager/cert-manager-545d4d4674-7wr8x" Mar 08 03:40:56.338927 master-0 kubenswrapper[33141]: I0308 03:40:56.331892 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-7wr8x" Mar 08 03:40:56.597475 master-0 kubenswrapper[33141]: I0308 03:40:56.597367 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-76b695cc4b-p6jt4" event={"ID":"510c4395-781d-48ea-b253-247bc7bcc3f4","Type":"ContainerStarted","Data":"f020a7f801ee1c46c402268e7895773dd4f29d737e82f664fcf112b7497982cb"} Mar 08 03:40:56.598009 master-0 kubenswrapper[33141]: I0308 03:40:56.597943 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-76b695cc4b-p6jt4" Mar 08 03:40:56.627405 master-0 kubenswrapper[33141]: I0308 03:40:56.624353 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-76b695cc4b-p6jt4" podStartSLOduration=2.24741996 podStartE2EDuration="6.624336833s" podCreationTimestamp="2026-03-08 03:40:50 +0000 UTC" firstStartedPulling="2026-03-08 03:40:51.208094509 +0000 UTC m=+565.077987702" 
lastFinishedPulling="2026-03-08 03:40:55.585011372 +0000 UTC m=+569.454904575" observedRunningTime="2026-03-08 03:40:56.61769342 +0000 UTC m=+570.487586613" watchObservedRunningTime="2026-03-08 03:40:56.624336833 +0000 UTC m=+570.494230026" Mar 08 03:40:57.607390 master-0 kubenswrapper[33141]: I0308 03:40:57.607345 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-ffz9l"] Mar 08 03:40:57.608816 master-0 kubenswrapper[33141]: I0308 03:40:57.608799 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-ffz9l" Mar 08 03:40:57.624381 master-0 kubenswrapper[33141]: I0308 03:40:57.618731 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Mar 08 03:40:57.643050 master-0 kubenswrapper[33141]: I0308 03:40:57.642680 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Mar 08 03:40:57.677961 master-0 kubenswrapper[33141]: I0308 03:40:57.671765 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-ffz9l"] Mar 08 03:40:57.701928 master-0 kubenswrapper[33141]: I0308 03:40:57.698839 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsjjv\" (UniqueName: \"kubernetes.io/projected/97c86970-ecaa-4aef-86b3-9a514a1de075-kube-api-access-hsjjv\") pod \"obo-prometheus-operator-68bc856cb9-ffz9l\" (UID: \"97c86970-ecaa-4aef-86b3-9a514a1de075\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-ffz9l" Mar 08 03:40:57.775279 master-0 kubenswrapper[33141]: I0308 03:40:57.775219 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-cn5c5"] Mar 08 03:40:57.787319 master-0 kubenswrapper[33141]: I0308 
03:40:57.787230 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-cn5c5" Mar 08 03:40:57.792930 master-0 kubenswrapper[33141]: I0308 03:40:57.790798 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 08 03:40:57.803926 master-0 kubenswrapper[33141]: I0308 03:40:57.801085 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsjjv\" (UniqueName: \"kubernetes.io/projected/97c86970-ecaa-4aef-86b3-9a514a1de075-kube-api-access-hsjjv\") pod \"obo-prometheus-operator-68bc856cb9-ffz9l\" (UID: \"97c86970-ecaa-4aef-86b3-9a514a1de075\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-ffz9l" Mar 08 03:40:57.811084 master-0 kubenswrapper[33141]: I0308 03:40:57.811026 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-cn5c5"] Mar 08 03:40:57.827548 master-0 kubenswrapper[33141]: I0308 03:40:57.827497 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-82kg8"] Mar 08 03:40:57.832481 master-0 kubenswrapper[33141]: I0308 03:40:57.832445 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-82kg8" Mar 08 03:40:57.839747 master-0 kubenswrapper[33141]: I0308 03:40:57.839706 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsjjv\" (UniqueName: \"kubernetes.io/projected/97c86970-ecaa-4aef-86b3-9a514a1de075-kube-api-access-hsjjv\") pod \"obo-prometheus-operator-68bc856cb9-ffz9l\" (UID: \"97c86970-ecaa-4aef-86b3-9a514a1de075\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-ffz9l" Mar 08 03:40:57.852984 master-0 kubenswrapper[33141]: I0308 03:40:57.852950 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-82kg8"] Mar 08 03:40:57.909982 master-0 kubenswrapper[33141]: I0308 03:40:57.906064 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/52519993-fb19-4251-96d1-3e9034236626-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-67c4d85dd4-cn5c5\" (UID: \"52519993-fb19-4251-96d1-3e9034236626\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-cn5c5" Mar 08 03:40:57.909982 master-0 kubenswrapper[33141]: I0308 03:40:57.906156 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1cf5f791-400d-4e37-8a8c-5c28d9fbb166-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-67c4d85dd4-82kg8\" (UID: \"1cf5f791-400d-4e37-8a8c-5c28d9fbb166\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-82kg8" Mar 08 03:40:57.909982 master-0 kubenswrapper[33141]: I0308 03:40:57.906325 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/1cf5f791-400d-4e37-8a8c-5c28d9fbb166-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-67c4d85dd4-82kg8\" (UID: \"1cf5f791-400d-4e37-8a8c-5c28d9fbb166\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-82kg8" Mar 08 03:40:57.909982 master-0 kubenswrapper[33141]: I0308 03:40:57.906376 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/52519993-fb19-4251-96d1-3e9034236626-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-67c4d85dd4-cn5c5\" (UID: \"52519993-fb19-4251-96d1-3e9034236626\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-cn5c5" Mar 08 03:40:57.929413 master-0 kubenswrapper[33141]: I0308 03:40:57.928860 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-v9hk7"] Mar 08 03:40:57.938933 master-0 kubenswrapper[33141]: I0308 03:40:57.930386 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-v9hk7" Mar 08 03:40:57.938933 master-0 kubenswrapper[33141]: I0308 03:40:57.935341 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 08 03:40:57.949853 master-0 kubenswrapper[33141]: I0308 03:40:57.947096 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-v9hk7"] Mar 08 03:40:58.003937 master-0 kubenswrapper[33141]: I0308 03:40:58.000745 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-ffz9l" Mar 08 03:40:58.008498 master-0 kubenswrapper[33141]: I0308 03:40:58.008433 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/13879810-602c-43af-a881-54d18130c358-observability-operator-tls\") pod \"observability-operator-59bdc8b94-v9hk7\" (UID: \"13879810-602c-43af-a881-54d18130c358\") " pod="openshift-operators/observability-operator-59bdc8b94-v9hk7" Mar 08 03:40:58.011931 master-0 kubenswrapper[33141]: I0308 03:40:58.008672 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/52519993-fb19-4251-96d1-3e9034236626-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-67c4d85dd4-cn5c5\" (UID: \"52519993-fb19-4251-96d1-3e9034236626\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-cn5c5" Mar 08 03:40:58.011931 master-0 kubenswrapper[33141]: I0308 03:40:58.008755 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1cf5f791-400d-4e37-8a8c-5c28d9fbb166-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-67c4d85dd4-82kg8\" (UID: \"1cf5f791-400d-4e37-8a8c-5c28d9fbb166\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-82kg8" Mar 08 03:40:58.011931 master-0 kubenswrapper[33141]: I0308 03:40:58.008809 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbhrr\" (UniqueName: \"kubernetes.io/projected/13879810-602c-43af-a881-54d18130c358-kube-api-access-dbhrr\") pod \"observability-operator-59bdc8b94-v9hk7\" (UID: \"13879810-602c-43af-a881-54d18130c358\") " pod="openshift-operators/observability-operator-59bdc8b94-v9hk7" Mar 08 03:40:58.011931 master-0 
kubenswrapper[33141]: I0308 03:40:58.008893 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1cf5f791-400d-4e37-8a8c-5c28d9fbb166-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-67c4d85dd4-82kg8\" (UID: \"1cf5f791-400d-4e37-8a8c-5c28d9fbb166\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-82kg8" Mar 08 03:40:58.011931 master-0 kubenswrapper[33141]: I0308 03:40:58.008945 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/52519993-fb19-4251-96d1-3e9034236626-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-67c4d85dd4-cn5c5\" (UID: \"52519993-fb19-4251-96d1-3e9034236626\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-cn5c5" Mar 08 03:40:58.014976 master-0 kubenswrapper[33141]: I0308 03:40:58.012364 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1cf5f791-400d-4e37-8a8c-5c28d9fbb166-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-67c4d85dd4-82kg8\" (UID: \"1cf5f791-400d-4e37-8a8c-5c28d9fbb166\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-82kg8" Mar 08 03:40:58.014976 master-0 kubenswrapper[33141]: I0308 03:40:58.014304 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/52519993-fb19-4251-96d1-3e9034236626-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-67c4d85dd4-cn5c5\" (UID: \"52519993-fb19-4251-96d1-3e9034236626\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-cn5c5" Mar 08 03:40:58.014976 master-0 kubenswrapper[33141]: I0308 03:40:58.014644 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/1cf5f791-400d-4e37-8a8c-5c28d9fbb166-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-67c4d85dd4-82kg8\" (UID: \"1cf5f791-400d-4e37-8a8c-5c28d9fbb166\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-82kg8" Mar 08 03:40:58.029892 master-0 kubenswrapper[33141]: I0308 03:40:58.028677 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/52519993-fb19-4251-96d1-3e9034236626-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-67c4d85dd4-cn5c5\" (UID: \"52519993-fb19-4251-96d1-3e9034236626\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-cn5c5" Mar 08 03:40:58.085417 master-0 kubenswrapper[33141]: I0308 03:40:58.084500 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tqxdk"] Mar 08 03:40:58.085858 master-0 kubenswrapper[33141]: I0308 03:40:58.085836 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tqxdk" Mar 08 03:40:58.096054 master-0 kubenswrapper[33141]: I0308 03:40:58.095993 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tqxdk"] Mar 08 03:40:58.111508 master-0 kubenswrapper[33141]: I0308 03:40:58.110892 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbhrr\" (UniqueName: \"kubernetes.io/projected/13879810-602c-43af-a881-54d18130c358-kube-api-access-dbhrr\") pod \"observability-operator-59bdc8b94-v9hk7\" (UID: \"13879810-602c-43af-a881-54d18130c358\") " pod="openshift-operators/observability-operator-59bdc8b94-v9hk7" Mar 08 03:40:58.111508 master-0 kubenswrapper[33141]: I0308 03:40:58.110995 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/13879810-602c-43af-a881-54d18130c358-observability-operator-tls\") pod \"observability-operator-59bdc8b94-v9hk7\" (UID: \"13879810-602c-43af-a881-54d18130c358\") " pod="openshift-operators/observability-operator-59bdc8b94-v9hk7" Mar 08 03:40:58.116087 master-0 kubenswrapper[33141]: I0308 03:40:58.115637 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/13879810-602c-43af-a881-54d18130c358-observability-operator-tls\") pod \"observability-operator-59bdc8b94-v9hk7\" (UID: \"13879810-602c-43af-a881-54d18130c358\") " pod="openshift-operators/observability-operator-59bdc8b94-v9hk7" Mar 08 03:40:58.136243 master-0 kubenswrapper[33141]: I0308 03:40:58.133601 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbhrr\" (UniqueName: \"kubernetes.io/projected/13879810-602c-43af-a881-54d18130c358-kube-api-access-dbhrr\") pod \"observability-operator-59bdc8b94-v9hk7\" (UID: \"13879810-602c-43af-a881-54d18130c358\") " 
pod="openshift-operators/observability-operator-59bdc8b94-v9hk7" Mar 08 03:40:58.170037 master-0 kubenswrapper[33141]: I0308 03:40:58.169895 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-cn5c5" Mar 08 03:40:58.213778 master-0 kubenswrapper[33141]: I0308 03:40:58.213699 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/607a5d1b-0fde-4771-afe2-9705030fe181-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tqxdk\" (UID: \"607a5d1b-0fde-4771-afe2-9705030fe181\") " pod="openshift-operators/perses-operator-5bf474d74f-tqxdk" Mar 08 03:40:58.214080 master-0 kubenswrapper[33141]: I0308 03:40:58.214058 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s967g\" (UniqueName: \"kubernetes.io/projected/607a5d1b-0fde-4771-afe2-9705030fe181-kube-api-access-s967g\") pod \"perses-operator-5bf474d74f-tqxdk\" (UID: \"607a5d1b-0fde-4771-afe2-9705030fe181\") " pod="openshift-operators/perses-operator-5bf474d74f-tqxdk" Mar 08 03:40:58.226190 master-0 kubenswrapper[33141]: I0308 03:40:58.225530 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-82kg8" Mar 08 03:40:58.290896 master-0 kubenswrapper[33141]: I0308 03:40:58.290436 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-v9hk7" Mar 08 03:40:58.321185 master-0 kubenswrapper[33141]: I0308 03:40:58.316058 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/607a5d1b-0fde-4771-afe2-9705030fe181-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tqxdk\" (UID: \"607a5d1b-0fde-4771-afe2-9705030fe181\") " pod="openshift-operators/perses-operator-5bf474d74f-tqxdk" Mar 08 03:40:58.321185 master-0 kubenswrapper[33141]: I0308 03:40:58.316167 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s967g\" (UniqueName: \"kubernetes.io/projected/607a5d1b-0fde-4771-afe2-9705030fe181-kube-api-access-s967g\") pod \"perses-operator-5bf474d74f-tqxdk\" (UID: \"607a5d1b-0fde-4771-afe2-9705030fe181\") " pod="openshift-operators/perses-operator-5bf474d74f-tqxdk" Mar 08 03:40:58.321185 master-0 kubenswrapper[33141]: I0308 03:40:58.317851 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/607a5d1b-0fde-4771-afe2-9705030fe181-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tqxdk\" (UID: \"607a5d1b-0fde-4771-afe2-9705030fe181\") " pod="openshift-operators/perses-operator-5bf474d74f-tqxdk" Mar 08 03:40:58.343477 master-0 kubenswrapper[33141]: I0308 03:40:58.343432 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s967g\" (UniqueName: \"kubernetes.io/projected/607a5d1b-0fde-4771-afe2-9705030fe181-kube-api-access-s967g\") pod \"perses-operator-5bf474d74f-tqxdk\" (UID: \"607a5d1b-0fde-4771-afe2-9705030fe181\") " pod="openshift-operators/perses-operator-5bf474d74f-tqxdk" Mar 08 03:40:58.485404 master-0 kubenswrapper[33141]: I0308 03:40:58.485285 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tqxdk" Mar 08 03:40:59.060675 master-0 kubenswrapper[33141]: I0308 03:40:59.060616 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-ffz9l"] Mar 08 03:40:59.062196 master-0 kubenswrapper[33141]: W0308 03:40:59.062144 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97c86970_ecaa_4aef_86b3_9a514a1de075.slice/crio-5b8f3ae202f1fe092c9ee25a2f07aa869246cb4223afa02bd06ff263e0da04e2 WatchSource:0}: Error finding container 5b8f3ae202f1fe092c9ee25a2f07aa869246cb4223afa02bd06ff263e0da04e2: Status 404 returned error can't find the container with id 5b8f3ae202f1fe092c9ee25a2f07aa869246cb4223afa02bd06ff263e0da04e2 Mar 08 03:40:59.174877 master-0 kubenswrapper[33141]: I0308 03:40:59.174823 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-cn5c5"] Mar 08 03:40:59.175468 master-0 kubenswrapper[33141]: W0308 03:40:59.175432 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52519993_fb19_4251_96d1_3e9034236626.slice/crio-ad1050ba40d1ccd979365f3b7226b4014ee31a3339cc594b190aa8a536d96115 WatchSource:0}: Error finding container ad1050ba40d1ccd979365f3b7226b4014ee31a3339cc594b190aa8a536d96115: Status 404 returned error can't find the container with id ad1050ba40d1ccd979365f3b7226b4014ee31a3339cc594b190aa8a536d96115 Mar 08 03:40:59.184667 master-0 kubenswrapper[33141]: I0308 03:40:59.184626 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-7wr8x"] Mar 08 03:40:59.191795 master-0 kubenswrapper[33141]: W0308 03:40:59.191712 33141 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cf5f791_400d_4e37_8a8c_5c28d9fbb166.slice/crio-b611041054f637c6d03c0c070969c7041b308195431afdc8135fc949124e1881 WatchSource:0}: Error finding container b611041054f637c6d03c0c070969c7041b308195431afdc8135fc949124e1881: Status 404 returned error can't find the container with id b611041054f637c6d03c0c070969c7041b308195431afdc8135fc949124e1881 Mar 08 03:40:59.203289 master-0 kubenswrapper[33141]: I0308 03:40:59.201887 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-82kg8"] Mar 08 03:40:59.412155 master-0 kubenswrapper[33141]: I0308 03:40:59.412111 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-v9hk7"] Mar 08 03:40:59.434474 master-0 kubenswrapper[33141]: I0308 03:40:59.434424 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tqxdk"] Mar 08 03:40:59.438934 master-0 kubenswrapper[33141]: W0308 03:40:59.434853 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod607a5d1b_0fde_4771_afe2_9705030fe181.slice/crio-0be454543774eda250514cc88a3c4af1693caafc774f08dddb9a1bb1638d7eca WatchSource:0}: Error finding container 0be454543774eda250514cc88a3c4af1693caafc774f08dddb9a1bb1638d7eca: Status 404 returned error can't find the container with id 0be454543774eda250514cc88a3c4af1693caafc774f08dddb9a1bb1638d7eca Mar 08 03:40:59.664938 master-0 kubenswrapper[33141]: I0308 03:40:59.664822 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-v9hk7" event={"ID":"13879810-602c-43af-a881-54d18130c358","Type":"ContainerStarted","Data":"4a40705819e151ff82bf2e55827dd5016bc800fca1c45e6589e4eca863da0dde"} Mar 08 03:40:59.665972 master-0 kubenswrapper[33141]: I0308 03:40:59.665889 
33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-cn5c5" event={"ID":"52519993-fb19-4251-96d1-3e9034236626","Type":"ContainerStarted","Data":"ad1050ba40d1ccd979365f3b7226b4014ee31a3339cc594b190aa8a536d96115"} Mar 08 03:40:59.667289 master-0 kubenswrapper[33141]: I0308 03:40:59.667263 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-58cf648889-6c6hf" event={"ID":"60613c6d-80bd-4b7c-9560-69b983dd71df","Type":"ContainerStarted","Data":"690a7e8db6aa0b591e3a1733545b43e9d8dea8dcdacb57b38ddb34eed842522a"} Mar 08 03:40:59.667470 master-0 kubenswrapper[33141]: I0308 03:40:59.667430 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-58cf648889-6c6hf" Mar 08 03:40:59.668385 master-0 kubenswrapper[33141]: I0308 03:40:59.668358 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-tqxdk" event={"ID":"607a5d1b-0fde-4771-afe2-9705030fe181","Type":"ContainerStarted","Data":"0be454543774eda250514cc88a3c4af1693caafc774f08dddb9a1bb1638d7eca"} Mar 08 03:40:59.669475 master-0 kubenswrapper[33141]: I0308 03:40:59.669429 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-ffz9l" event={"ID":"97c86970-ecaa-4aef-86b3-9a514a1de075","Type":"ContainerStarted","Data":"5b8f3ae202f1fe092c9ee25a2f07aa869246cb4223afa02bd06ff263e0da04e2"} Mar 08 03:40:59.673689 master-0 kubenswrapper[33141]: I0308 03:40:59.673649 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-7wr8x" event={"ID":"420b9a36-158d-4468-924e-074e0e2c4f5c","Type":"ContainerStarted","Data":"0726952ede8585ac05d4d9ef8b39b99a216662facd83243d3c27d44b25699da4"} Mar 08 03:40:59.673804 master-0 kubenswrapper[33141]: I0308 03:40:59.673694 33141 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-7wr8x" event={"ID":"420b9a36-158d-4468-924e-074e0e2c4f5c","Type":"ContainerStarted","Data":"e023125c65286da2604acd4c2844b0965175623a9476df5a6edc70c49aa5442d"} Mar 08 03:40:59.674680 master-0 kubenswrapper[33141]: I0308 03:40:59.674642 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-82kg8" event={"ID":"1cf5f791-400d-4e37-8a8c-5c28d9fbb166","Type":"ContainerStarted","Data":"b611041054f637c6d03c0c070969c7041b308195431afdc8135fc949124e1881"} Mar 08 03:40:59.699597 master-0 kubenswrapper[33141]: I0308 03:40:59.699525 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-58cf648889-6c6hf" podStartSLOduration=2.974483522 podStartE2EDuration="9.699507602s" podCreationTimestamp="2026-03-08 03:40:50 +0000 UTC" firstStartedPulling="2026-03-08 03:40:51.884106451 +0000 UTC m=+565.753999634" lastFinishedPulling="2026-03-08 03:40:58.609130521 +0000 UTC m=+572.479023714" observedRunningTime="2026-03-08 03:40:59.698683441 +0000 UTC m=+573.568576664" watchObservedRunningTime="2026-03-08 03:40:59.699507602 +0000 UTC m=+573.569400785" Mar 08 03:40:59.727388 master-0 kubenswrapper[33141]: I0308 03:40:59.727313 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-7wr8x" podStartSLOduration=4.7272967359999996 podStartE2EDuration="4.727296736s" podCreationTimestamp="2026-03-08 03:40:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:40:59.722577293 +0000 UTC m=+573.592470486" watchObservedRunningTime="2026-03-08 03:40:59.727296736 +0000 UTC m=+573.597189929" Mar 08 03:41:11.295723 master-0 kubenswrapper[33141]: I0308 03:41:11.295650 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/metallb-operator-webhook-server-58cf648889-6c6hf" Mar 08 03:41:11.795356 master-0 kubenswrapper[33141]: I0308 03:41:11.794694 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-tqxdk" event={"ID":"607a5d1b-0fde-4771-afe2-9705030fe181","Type":"ContainerStarted","Data":"87f1a2748e73cc4e48a3a47d79fba65eb1f7896d19ebc42bbcc7d7d5804f4988"} Mar 08 03:41:11.795356 master-0 kubenswrapper[33141]: I0308 03:41:11.794834 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-tqxdk" Mar 08 03:41:11.796575 master-0 kubenswrapper[33141]: I0308 03:41:11.796450 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-ffz9l" event={"ID":"97c86970-ecaa-4aef-86b3-9a514a1de075","Type":"ContainerStarted","Data":"b862622b1cb92b4dc1d4bd5e6e573afcc0d3a190ec46f1bc5bc803da4cb83c06"} Mar 08 03:41:11.799302 master-0 kubenswrapper[33141]: I0308 03:41:11.799255 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-82kg8" event={"ID":"1cf5f791-400d-4e37-8a8c-5c28d9fbb166","Type":"ContainerStarted","Data":"d121fee58de0992e5c2d8a3a64cdd70d9081c9a4183ca425ded01edb504ed056"} Mar 08 03:41:11.801011 master-0 kubenswrapper[33141]: I0308 03:41:11.800946 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-v9hk7" event={"ID":"13879810-602c-43af-a881-54d18130c358","Type":"ContainerStarted","Data":"97732442c60355d444c15b6913242bce948beb0d04d82906bcc70ddf35fb80e3"} Mar 08 03:41:11.801108 master-0 kubenswrapper[33141]: I0308 03:41:11.801032 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-v9hk7" Mar 08 03:41:11.803157 master-0 kubenswrapper[33141]: I0308 03:41:11.802797 33141 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-cn5c5" event={"ID":"52519993-fb19-4251-96d1-3e9034236626","Type":"ContainerStarted","Data":"12acf0d0d35b34522c00bcab6f37a6425d66c793384f8259d99126460bbc2d03"} Mar 08 03:41:11.814932 master-0 kubenswrapper[33141]: I0308 03:41:11.814849 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-tqxdk" podStartSLOduration=2.146636624 podStartE2EDuration="13.814814892s" podCreationTimestamp="2026-03-08 03:40:58 +0000 UTC" firstStartedPulling="2026-03-08 03:40:59.450061857 +0000 UTC m=+573.319955050" lastFinishedPulling="2026-03-08 03:41:11.118240125 +0000 UTC m=+584.988133318" observedRunningTime="2026-03-08 03:41:11.811884415 +0000 UTC m=+585.681777618" watchObservedRunningTime="2026-03-08 03:41:11.814814892 +0000 UTC m=+585.684708085" Mar 08 03:41:11.842823 master-0 kubenswrapper[33141]: I0308 03:41:11.842739 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-v9hk7" podStartSLOduration=3.203257599 podStartE2EDuration="14.842721638s" podCreationTimestamp="2026-03-08 03:40:57 +0000 UTC" firstStartedPulling="2026-03-08 03:40:59.423000413 +0000 UTC m=+573.292893606" lastFinishedPulling="2026-03-08 03:41:11.062464452 +0000 UTC m=+584.932357645" observedRunningTime="2026-03-08 03:41:11.838146339 +0000 UTC m=+585.708039542" watchObservedRunningTime="2026-03-08 03:41:11.842721638 +0000 UTC m=+585.712614831" Mar 08 03:41:11.864992 master-0 kubenswrapper[33141]: I0308 03:41:11.864895 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-ffz9l" podStartSLOduration=2.866989313 podStartE2EDuration="14.864876245s" podCreationTimestamp="2026-03-08 03:40:57 +0000 UTC" firstStartedPulling="2026-03-08 03:40:59.064730964 +0000 UTC m=+572.934624167" 
lastFinishedPulling="2026-03-08 03:41:11.062617866 +0000 UTC m=+584.932511099" observedRunningTime="2026-03-08 03:41:11.861585459 +0000 UTC m=+585.731478652" watchObservedRunningTime="2026-03-08 03:41:11.864876245 +0000 UTC m=+585.734769438" Mar 08 03:41:11.869622 master-0 kubenswrapper[33141]: I0308 03:41:11.869579 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-v9hk7" Mar 08 03:41:11.898430 master-0 kubenswrapper[33141]: I0308 03:41:11.898342 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-cn5c5" podStartSLOduration=2.964168994 podStartE2EDuration="14.898322936s" podCreationTimestamp="2026-03-08 03:40:57 +0000 UTC" firstStartedPulling="2026-03-08 03:40:59.182116831 +0000 UTC m=+573.052010024" lastFinishedPulling="2026-03-08 03:41:11.116270763 +0000 UTC m=+584.986163966" observedRunningTime="2026-03-08 03:41:11.895613025 +0000 UTC m=+585.765506228" watchObservedRunningTime="2026-03-08 03:41:11.898322936 +0000 UTC m=+585.768216129" Mar 08 03:41:11.931792 master-0 kubenswrapper[33141]: I0308 03:41:11.931699 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-67c4d85dd4-82kg8" podStartSLOduration=3.066732273 podStartE2EDuration="14.931679824s" podCreationTimestamp="2026-03-08 03:40:57 +0000 UTC" firstStartedPulling="2026-03-08 03:40:59.197186203 +0000 UTC m=+573.067079396" lastFinishedPulling="2026-03-08 03:41:11.062133744 +0000 UTC m=+584.932026947" observedRunningTime="2026-03-08 03:41:11.922157347 +0000 UTC m=+585.792050570" watchObservedRunningTime="2026-03-08 03:41:11.931679824 +0000 UTC m=+585.801573007" Mar 08 03:41:18.487695 master-0 kubenswrapper[33141]: I0308 03:41:18.487655 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators/perses-operator-5bf474d74f-tqxdk" Mar 08 03:41:30.510936 master-0 kubenswrapper[33141]: I0308 03:41:30.510758 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-76b695cc4b-p6jt4" Mar 08 03:41:38.475318 master-0 kubenswrapper[33141]: I0308 03:41:38.475239 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-hjjnv"] Mar 08 03:41:38.476566 master-0 kubenswrapper[33141]: I0308 03:41:38.476392 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-hjjnv" Mar 08 03:41:38.480083 master-0 kubenswrapper[33141]: I0308 03:41:38.479847 33141 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 08 03:41:38.484303 master-0 kubenswrapper[33141]: I0308 03:41:38.483810 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-mhfnq"] Mar 08 03:41:38.491958 master-0 kubenswrapper[33141]: I0308 03:41:38.491899 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30678329-c9f2-4958-9b2d-6bacd9250bbe-cert\") pod \"frr-k8s-webhook-server-7f989f654f-hjjnv\" (UID: \"30678329-c9f2-4958-9b2d-6bacd9250bbe\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-hjjnv" Mar 08 03:41:38.491958 master-0 kubenswrapper[33141]: I0308 03:41:38.491956 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tzsl\" (UniqueName: \"kubernetes.io/projected/30678329-c9f2-4958-9b2d-6bacd9250bbe-kube-api-access-5tzsl\") pod \"frr-k8s-webhook-server-7f989f654f-hjjnv\" (UID: \"30678329-c9f2-4958-9b2d-6bacd9250bbe\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-hjjnv" Mar 08 03:41:38.535964 master-0 kubenswrapper[33141]: I0308 
03:41:38.535917 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-hjjnv"] Mar 08 03:41:38.536175 master-0 kubenswrapper[33141]: I0308 03:41:38.536054 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:38.538791 master-0 kubenswrapper[33141]: I0308 03:41:38.538612 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 08 03:41:38.538989 master-0 kubenswrapper[33141]: I0308 03:41:38.538804 33141 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 08 03:41:38.597502 master-0 kubenswrapper[33141]: I0308 03:41:38.597354 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c1220927-804a-457f-81bf-e599bac8f203-frr-startup\") pod \"frr-k8s-mhfnq\" (UID: \"c1220927-804a-457f-81bf-e599bac8f203\") " pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:38.601928 master-0 kubenswrapper[33141]: I0308 03:41:38.598723 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1220927-804a-457f-81bf-e599bac8f203-metrics-certs\") pod \"frr-k8s-mhfnq\" (UID: \"c1220927-804a-457f-81bf-e599bac8f203\") " pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:38.601928 master-0 kubenswrapper[33141]: I0308 03:41:38.598784 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c1220927-804a-457f-81bf-e599bac8f203-frr-conf\") pod \"frr-k8s-mhfnq\" (UID: \"c1220927-804a-457f-81bf-e599bac8f203\") " pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:38.601928 master-0 kubenswrapper[33141]: I0308 03:41:38.598814 33141 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30678329-c9f2-4958-9b2d-6bacd9250bbe-cert\") pod \"frr-k8s-webhook-server-7f989f654f-hjjnv\" (UID: \"30678329-c9f2-4958-9b2d-6bacd9250bbe\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-hjjnv" Mar 08 03:41:38.601928 master-0 kubenswrapper[33141]: I0308 03:41:38.598830 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tzsl\" (UniqueName: \"kubernetes.io/projected/30678329-c9f2-4958-9b2d-6bacd9250bbe-kube-api-access-5tzsl\") pod \"frr-k8s-webhook-server-7f989f654f-hjjnv\" (UID: \"30678329-c9f2-4958-9b2d-6bacd9250bbe\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-hjjnv" Mar 08 03:41:38.601928 master-0 kubenswrapper[33141]: I0308 03:41:38.598851 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c1220927-804a-457f-81bf-e599bac8f203-reloader\") pod \"frr-k8s-mhfnq\" (UID: \"c1220927-804a-457f-81bf-e599bac8f203\") " pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:38.601928 master-0 kubenswrapper[33141]: I0308 03:41:38.598882 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c1220927-804a-457f-81bf-e599bac8f203-metrics\") pod \"frr-k8s-mhfnq\" (UID: \"c1220927-804a-457f-81bf-e599bac8f203\") " pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:38.601928 master-0 kubenswrapper[33141]: I0308 03:41:38.598936 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c1220927-804a-457f-81bf-e599bac8f203-frr-sockets\") pod \"frr-k8s-mhfnq\" (UID: \"c1220927-804a-457f-81bf-e599bac8f203\") " pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:38.601928 master-0 kubenswrapper[33141]: I0308 03:41:38.598958 33141 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf5vf\" (UniqueName: \"kubernetes.io/projected/c1220927-804a-457f-81bf-e599bac8f203-kube-api-access-vf5vf\") pod \"frr-k8s-mhfnq\" (UID: \"c1220927-804a-457f-81bf-e599bac8f203\") " pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:38.606952 master-0 kubenswrapper[33141]: I0308 03:41:38.603620 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/30678329-c9f2-4958-9b2d-6bacd9250bbe-cert\") pod \"frr-k8s-webhook-server-7f989f654f-hjjnv\" (UID: \"30678329-c9f2-4958-9b2d-6bacd9250bbe\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-hjjnv" Mar 08 03:41:38.634925 master-0 kubenswrapper[33141]: I0308 03:41:38.632486 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-jhqp7"] Mar 08 03:41:38.634925 master-0 kubenswrapper[33141]: I0308 03:41:38.633667 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tzsl\" (UniqueName: \"kubernetes.io/projected/30678329-c9f2-4958-9b2d-6bacd9250bbe-kube-api-access-5tzsl\") pod \"frr-k8s-webhook-server-7f989f654f-hjjnv\" (UID: \"30678329-c9f2-4958-9b2d-6bacd9250bbe\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-hjjnv" Mar 08 03:41:38.634925 master-0 kubenswrapper[33141]: I0308 03:41:38.634246 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-jhqp7" Mar 08 03:41:38.644925 master-0 kubenswrapper[33141]: I0308 03:41:38.638127 33141 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 08 03:41:38.644925 master-0 kubenswrapper[33141]: I0308 03:41:38.638374 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 08 03:41:38.644925 master-0 kubenswrapper[33141]: I0308 03:41:38.638509 33141 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 08 03:41:38.648959 master-0 kubenswrapper[33141]: I0308 03:41:38.647156 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-86ddb6bd46-v6lcp"] Mar 08 03:41:38.649121 master-0 kubenswrapper[33141]: I0308 03:41:38.649038 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-86ddb6bd46-v6lcp" Mar 08 03:41:38.653922 master-0 kubenswrapper[33141]: I0308 03:41:38.650881 33141 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 08 03:41:38.673927 master-0 kubenswrapper[33141]: I0308 03:41:38.670017 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-v6lcp"] Mar 08 03:41:38.704735 master-0 kubenswrapper[33141]: I0308 03:41:38.704283 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f67d\" (UniqueName: \"kubernetes.io/projected/78826ab3-1b89-4efe-9986-38e67fc8b8f1-kube-api-access-2f67d\") pod \"controller-86ddb6bd46-v6lcp\" (UID: \"78826ab3-1b89-4efe-9986-38e67fc8b8f1\") " pod="metallb-system/controller-86ddb6bd46-v6lcp" Mar 08 03:41:38.705015 master-0 kubenswrapper[33141]: I0308 03:41:38.704400 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/78826ab3-1b89-4efe-9986-38e67fc8b8f1-cert\") pod \"controller-86ddb6bd46-v6lcp\" (UID: \"78826ab3-1b89-4efe-9986-38e67fc8b8f1\") " pod="metallb-system/controller-86ddb6bd46-v6lcp" Mar 08 03:41:38.705084 master-0 kubenswrapper[33141]: I0308 03:41:38.705051 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c1220927-804a-457f-81bf-e599bac8f203-frr-sockets\") pod \"frr-k8s-mhfnq\" (UID: \"c1220927-804a-457f-81bf-e599bac8f203\") " pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:38.705252 master-0 kubenswrapper[33141]: I0308 03:41:38.705080 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thspg\" (UniqueName: \"kubernetes.io/projected/b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d-kube-api-access-thspg\") pod \"speaker-jhqp7\" (UID: \"b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d\") " pod="metallb-system/speaker-jhqp7" Mar 08 03:41:38.705252 master-0 kubenswrapper[33141]: I0308 03:41:38.705105 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf5vf\" (UniqueName: \"kubernetes.io/projected/c1220927-804a-457f-81bf-e599bac8f203-kube-api-access-vf5vf\") pod \"frr-k8s-mhfnq\" (UID: \"c1220927-804a-457f-81bf-e599bac8f203\") " pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:38.705252 master-0 kubenswrapper[33141]: I0308 03:41:38.705200 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d-memberlist\") pod \"speaker-jhqp7\" (UID: \"b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d\") " pod="metallb-system/speaker-jhqp7" Mar 08 03:41:38.705437 master-0 kubenswrapper[33141]: I0308 03:41:38.705285 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/78826ab3-1b89-4efe-9986-38e67fc8b8f1-metrics-certs\") pod \"controller-86ddb6bd46-v6lcp\" (UID: \"78826ab3-1b89-4efe-9986-38e67fc8b8f1\") " pod="metallb-system/controller-86ddb6bd46-v6lcp" Mar 08 03:41:38.705437 master-0 kubenswrapper[33141]: I0308 03:41:38.705320 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d-metallb-excludel2\") pod \"speaker-jhqp7\" (UID: \"b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d\") " pod="metallb-system/speaker-jhqp7" Mar 08 03:41:38.705437 master-0 kubenswrapper[33141]: I0308 03:41:38.705350 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c1220927-804a-457f-81bf-e599bac8f203-frr-startup\") pod \"frr-k8s-mhfnq\" (UID: \"c1220927-804a-457f-81bf-e599bac8f203\") " pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:38.705437 master-0 kubenswrapper[33141]: I0308 03:41:38.705406 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d-metrics-certs\") pod \"speaker-jhqp7\" (UID: \"b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d\") " pod="metallb-system/speaker-jhqp7" Mar 08 03:41:38.705555 master-0 kubenswrapper[33141]: I0308 03:41:38.705446 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1220927-804a-457f-81bf-e599bac8f203-metrics-certs\") pod \"frr-k8s-mhfnq\" (UID: \"c1220927-804a-457f-81bf-e599bac8f203\") " pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:38.705555 master-0 kubenswrapper[33141]: I0308 03:41:38.705492 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/c1220927-804a-457f-81bf-e599bac8f203-frr-conf\") pod \"frr-k8s-mhfnq\" (UID: \"c1220927-804a-457f-81bf-e599bac8f203\") " pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:38.705555 master-0 kubenswrapper[33141]: I0308 03:41:38.705526 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c1220927-804a-457f-81bf-e599bac8f203-reloader\") pod \"frr-k8s-mhfnq\" (UID: \"c1220927-804a-457f-81bf-e599bac8f203\") " pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:38.705650 master-0 kubenswrapper[33141]: I0308 03:41:38.705566 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c1220927-804a-457f-81bf-e599bac8f203-metrics\") pod \"frr-k8s-mhfnq\" (UID: \"c1220927-804a-457f-81bf-e599bac8f203\") " pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:38.710988 master-0 kubenswrapper[33141]: I0308 03:41:38.706069 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c1220927-804a-457f-81bf-e599bac8f203-frr-sockets\") pod \"frr-k8s-mhfnq\" (UID: \"c1220927-804a-457f-81bf-e599bac8f203\") " pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:38.710988 master-0 kubenswrapper[33141]: I0308 03:41:38.707180 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c1220927-804a-457f-81bf-e599bac8f203-frr-startup\") pod \"frr-k8s-mhfnq\" (UID: \"c1220927-804a-457f-81bf-e599bac8f203\") " pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:38.710988 master-0 kubenswrapper[33141]: I0308 03:41:38.708642 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c1220927-804a-457f-81bf-e599bac8f203-frr-conf\") pod \"frr-k8s-mhfnq\" (UID: \"c1220927-804a-457f-81bf-e599bac8f203\") " pod="metallb-system/frr-k8s-mhfnq" 
Mar 08 03:41:38.710988 master-0 kubenswrapper[33141]: I0308 03:41:38.708867 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c1220927-804a-457f-81bf-e599bac8f203-reloader\") pod \"frr-k8s-mhfnq\" (UID: \"c1220927-804a-457f-81bf-e599bac8f203\") " pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:38.710988 master-0 kubenswrapper[33141]: I0308 03:41:38.710151 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c1220927-804a-457f-81bf-e599bac8f203-metrics\") pod \"frr-k8s-mhfnq\" (UID: \"c1220927-804a-457f-81bf-e599bac8f203\") " pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:38.711914 master-0 kubenswrapper[33141]: I0308 03:41:38.711469 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1220927-804a-457f-81bf-e599bac8f203-metrics-certs\") pod \"frr-k8s-mhfnq\" (UID: \"c1220927-804a-457f-81bf-e599bac8f203\") " pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:38.731682 master-0 kubenswrapper[33141]: I0308 03:41:38.731564 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf5vf\" (UniqueName: \"kubernetes.io/projected/c1220927-804a-457f-81bf-e599bac8f203-kube-api-access-vf5vf\") pod \"frr-k8s-mhfnq\" (UID: \"c1220927-804a-457f-81bf-e599bac8f203\") " pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:38.807416 master-0 kubenswrapper[33141]: I0308 03:41:38.807320 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f67d\" (UniqueName: \"kubernetes.io/projected/78826ab3-1b89-4efe-9986-38e67fc8b8f1-kube-api-access-2f67d\") pod \"controller-86ddb6bd46-v6lcp\" (UID: \"78826ab3-1b89-4efe-9986-38e67fc8b8f1\") " pod="metallb-system/controller-86ddb6bd46-v6lcp" Mar 08 03:41:38.807416 master-0 kubenswrapper[33141]: I0308 03:41:38.807385 33141 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78826ab3-1b89-4efe-9986-38e67fc8b8f1-cert\") pod \"controller-86ddb6bd46-v6lcp\" (UID: \"78826ab3-1b89-4efe-9986-38e67fc8b8f1\") " pod="metallb-system/controller-86ddb6bd46-v6lcp" Mar 08 03:41:38.807789 master-0 kubenswrapper[33141]: I0308 03:41:38.807742 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thspg\" (UniqueName: \"kubernetes.io/projected/b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d-kube-api-access-thspg\") pod \"speaker-jhqp7\" (UID: \"b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d\") " pod="metallb-system/speaker-jhqp7" Mar 08 03:41:38.808328 master-0 kubenswrapper[33141]: I0308 03:41:38.808310 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d-memberlist\") pod \"speaker-jhqp7\" (UID: \"b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d\") " pod="metallb-system/speaker-jhqp7" Mar 08 03:41:38.808492 master-0 kubenswrapper[33141]: I0308 03:41:38.808478 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78826ab3-1b89-4efe-9986-38e67fc8b8f1-metrics-certs\") pod \"controller-86ddb6bd46-v6lcp\" (UID: \"78826ab3-1b89-4efe-9986-38e67fc8b8f1\") " pod="metallb-system/controller-86ddb6bd46-v6lcp" Mar 08 03:41:38.808580 master-0 kubenswrapper[33141]: I0308 03:41:38.808568 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d-metallb-excludel2\") pod \"speaker-jhqp7\" (UID: \"b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d\") " pod="metallb-system/speaker-jhqp7" Mar 08 03:41:38.808724 master-0 kubenswrapper[33141]: I0308 03:41:38.808711 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d-metrics-certs\") pod \"speaker-jhqp7\" (UID: \"b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d\") " pod="metallb-system/speaker-jhqp7" Mar 08 03:41:38.809133 master-0 kubenswrapper[33141]: E0308 03:41:38.809096 33141 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 08 03:41:38.809195 master-0 kubenswrapper[33141]: E0308 03:41:38.809163 33141 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d-memberlist podName:b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d nodeName:}" failed. No retries permitted until 2026-03-08 03:41:39.309145911 +0000 UTC m=+613.179039104 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d-memberlist") pod "speaker-jhqp7" (UID: "b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d") : secret "metallb-memberlist" not found Mar 08 03:41:38.809344 master-0 kubenswrapper[33141]: I0308 03:41:38.809304 33141 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 08 03:41:38.809837 master-0 kubenswrapper[33141]: I0308 03:41:38.809818 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d-metallb-excludel2\") pod \"speaker-jhqp7\" (UID: \"b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d\") " pod="metallb-system/speaker-jhqp7" Mar 08 03:41:38.812477 master-0 kubenswrapper[33141]: I0308 03:41:38.812444 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/78826ab3-1b89-4efe-9986-38e67fc8b8f1-metrics-certs\") pod \"controller-86ddb6bd46-v6lcp\" (UID: \"78826ab3-1b89-4efe-9986-38e67fc8b8f1\") " pod="metallb-system/controller-86ddb6bd46-v6lcp" Mar 08 03:41:38.812684 
master-0 kubenswrapper[33141]: I0308 03:41:38.812655 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d-metrics-certs\") pod \"speaker-jhqp7\" (UID: \"b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d\") " pod="metallb-system/speaker-jhqp7" Mar 08 03:41:38.820780 master-0 kubenswrapper[33141]: I0308 03:41:38.820748 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/78826ab3-1b89-4efe-9986-38e67fc8b8f1-cert\") pod \"controller-86ddb6bd46-v6lcp\" (UID: \"78826ab3-1b89-4efe-9986-38e67fc8b8f1\") " pod="metallb-system/controller-86ddb6bd46-v6lcp" Mar 08 03:41:38.821825 master-0 kubenswrapper[33141]: I0308 03:41:38.821783 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thspg\" (UniqueName: \"kubernetes.io/projected/b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d-kube-api-access-thspg\") pod \"speaker-jhqp7\" (UID: \"b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d\") " pod="metallb-system/speaker-jhqp7" Mar 08 03:41:38.824886 master-0 kubenswrapper[33141]: I0308 03:41:38.824711 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f67d\" (UniqueName: \"kubernetes.io/projected/78826ab3-1b89-4efe-9986-38e67fc8b8f1-kube-api-access-2f67d\") pod \"controller-86ddb6bd46-v6lcp\" (UID: \"78826ab3-1b89-4efe-9986-38e67fc8b8f1\") " pod="metallb-system/controller-86ddb6bd46-v6lcp" Mar 08 03:41:38.871951 master-0 kubenswrapper[33141]: I0308 03:41:38.871886 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-hjjnv" Mar 08 03:41:38.887274 master-0 kubenswrapper[33141]: I0308 03:41:38.887236 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:39.011996 master-0 kubenswrapper[33141]: I0308 03:41:39.011935 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-86ddb6bd46-v6lcp" Mar 08 03:41:39.068197 master-0 kubenswrapper[33141]: I0308 03:41:39.065961 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mhfnq" event={"ID":"c1220927-804a-457f-81bf-e599bac8f203","Type":"ContainerStarted","Data":"4cc95477d94ce03abf2b4b5de3f3ef65873150887508257ac6112eb50907310d"} Mar 08 03:41:39.289878 master-0 kubenswrapper[33141]: W0308 03:41:39.289824 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30678329_c9f2_4958_9b2d_6bacd9250bbe.slice/crio-46522eb88bccae9cfddcbfbcd5c55887a4b49d88d854691c6e2634aedf5100e3 WatchSource:0}: Error finding container 46522eb88bccae9cfddcbfbcd5c55887a4b49d88d854691c6e2634aedf5100e3: Status 404 returned error can't find the container with id 46522eb88bccae9cfddcbfbcd5c55887a4b49d88d854691c6e2634aedf5100e3 Mar 08 03:41:39.291200 master-0 kubenswrapper[33141]: I0308 03:41:39.291162 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-hjjnv"] Mar 08 03:41:39.325519 master-0 kubenswrapper[33141]: I0308 03:41:39.325454 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d-memberlist\") pod \"speaker-jhqp7\" (UID: \"b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d\") " pod="metallb-system/speaker-jhqp7" Mar 08 03:41:39.325789 master-0 kubenswrapper[33141]: E0308 03:41:39.325733 33141 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 08 03:41:39.325877 master-0 kubenswrapper[33141]: E0308 03:41:39.325853 33141 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d-memberlist podName:b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d nodeName:}" failed. No retries permitted until 2026-03-08 03:41:40.325826317 +0000 UTC m=+614.195719520 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d-memberlist") pod "speaker-jhqp7" (UID: "b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d") : secret "metallb-memberlist" not found
Mar 08 03:41:39.450766 master-0 kubenswrapper[33141]: I0308 03:41:39.450703 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-v6lcp"]
Mar 08 03:41:39.455183 master-0 kubenswrapper[33141]: W0308 03:41:39.455130 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78826ab3_1b89_4efe_9986_38e67fc8b8f1.slice/crio-3c79d648f09bc9084f8d048a0a2f62835fde8a4355be3923f8d2db8674fb4d94 WatchSource:0}: Error finding container 3c79d648f09bc9084f8d048a0a2f62835fde8a4355be3923f8d2db8674fb4d94: Status 404 returned error can't find the container with id 3c79d648f09bc9084f8d048a0a2f62835fde8a4355be3923f8d2db8674fb4d94
Mar 08 03:41:40.082416 master-0 kubenswrapper[33141]: I0308 03:41:40.082348 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-hjjnv" event={"ID":"30678329-c9f2-4958-9b2d-6bacd9250bbe","Type":"ContainerStarted","Data":"46522eb88bccae9cfddcbfbcd5c55887a4b49d88d854691c6e2634aedf5100e3"}
Mar 08 03:41:40.084657 master-0 kubenswrapper[33141]: I0308 03:41:40.084606 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-v6lcp" event={"ID":"78826ab3-1b89-4efe-9986-38e67fc8b8f1","Type":"ContainerStarted","Data":"3bcb112062211067949274f50bd0b981dd4a7b1e213c45bff87dd4ae2b725f15"}
Mar 08 03:41:40.084726 master-0 kubenswrapper[33141]: I0308 03:41:40.084658 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-v6lcp" event={"ID":"78826ab3-1b89-4efe-9986-38e67fc8b8f1","Type":"ContainerStarted","Data":"3c79d648f09bc9084f8d048a0a2f62835fde8a4355be3923f8d2db8674fb4d94"}
Mar 08 03:41:40.347448 master-0 kubenswrapper[33141]: I0308 03:41:40.347194 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d-memberlist\") pod \"speaker-jhqp7\" (UID: \"b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d\") " pod="metallb-system/speaker-jhqp7"
Mar 08 03:41:40.350637 master-0 kubenswrapper[33141]: I0308 03:41:40.350592 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d-memberlist\") pod \"speaker-jhqp7\" (UID: \"b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d\") " pod="metallb-system/speaker-jhqp7"
Mar 08 03:41:40.493736 master-0 kubenswrapper[33141]: I0308 03:41:40.493669 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-jhqp7"
Mar 08 03:41:40.525774 master-0 kubenswrapper[33141]: W0308 03:41:40.525707 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1c3a32e_f5a0_43e2_8bad_7f1b5ec35f1d.slice/crio-9fffea19a80a82bfd69def7dddc2d6a8e32a491f97e6972b1fd9ebdb63f39129 WatchSource:0}: Error finding container 9fffea19a80a82bfd69def7dddc2d6a8e32a491f97e6972b1fd9ebdb63f39129: Status 404 returned error can't find the container with id 9fffea19a80a82bfd69def7dddc2d6a8e32a491f97e6972b1fd9ebdb63f39129
Mar 08 03:41:40.747114 master-0 kubenswrapper[33141]: I0308 03:41:40.746768 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-b6x7j"]
Mar 08 03:41:40.750491 master-0 kubenswrapper[33141]: I0308 03:41:40.750281 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-b6x7j"
Mar 08 03:41:40.768729 master-0 kubenswrapper[33141]: I0308 03:41:40.764961 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-c9mns"]
Mar 08 03:41:40.774398 master-0 kubenswrapper[33141]: I0308 03:41:40.774329 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-c9mns"
Mar 08 03:41:40.776062 master-0 kubenswrapper[33141]: I0308 03:41:40.775763 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Mar 08 03:41:40.790594 master-0 kubenswrapper[33141]: I0308 03:41:40.789951 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-c9mns"]
Mar 08 03:41:40.804343 master-0 kubenswrapper[33141]: I0308 03:41:40.802197 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-b6x7j"]
Mar 08 03:41:40.825551 master-0 kubenswrapper[33141]: I0308 03:41:40.825511 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-9tlm8"]
Mar 08 03:41:40.830211 master-0 kubenswrapper[33141]: I0308 03:41:40.830169 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-9tlm8"
Mar 08 03:41:40.854498 master-0 kubenswrapper[33141]: I0308 03:41:40.854439 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w2qh\" (UniqueName: \"kubernetes.io/projected/56ce4272-f506-4729-a411-d59d530ed5ea-kube-api-access-2w2qh\") pod \"nmstate-metrics-69594cc75-b6x7j\" (UID: \"56ce4272-f506-4729-a411-d59d530ed5ea\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-b6x7j"
Mar 08 03:41:40.854719 master-0 kubenswrapper[33141]: I0308 03:41:40.854508 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/fe851503-1189-44d9-aaf7-2eb9b9b886a1-nmstate-lock\") pod \"nmstate-handler-9tlm8\" (UID: \"fe851503-1189-44d9-aaf7-2eb9b9b886a1\") " pod="openshift-nmstate/nmstate-handler-9tlm8"
Mar 08 03:41:40.854719 master-0 kubenswrapper[33141]: I0308 03:41:40.854582 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-899ll\" (UniqueName: \"kubernetes.io/projected/49c2416a-c985-49a6-b624-134998684fe6-kube-api-access-899ll\") pod \"nmstate-webhook-786f45cff4-c9mns\" (UID: \"49c2416a-c985-49a6-b624-134998684fe6\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-c9mns"
Mar 08 03:41:40.854719 master-0 kubenswrapper[33141]: I0308 03:41:40.854606 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fpnb\" (UniqueName: \"kubernetes.io/projected/fe851503-1189-44d9-aaf7-2eb9b9b886a1-kube-api-access-8fpnb\") pod \"nmstate-handler-9tlm8\" (UID: \"fe851503-1189-44d9-aaf7-2eb9b9b886a1\") " pod="openshift-nmstate/nmstate-handler-9tlm8"
Mar 08 03:41:40.854719 master-0 kubenswrapper[33141]: I0308 03:41:40.854627 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/fe851503-1189-44d9-aaf7-2eb9b9b886a1-dbus-socket\") pod \"nmstate-handler-9tlm8\" (UID: \"fe851503-1189-44d9-aaf7-2eb9b9b886a1\") " pod="openshift-nmstate/nmstate-handler-9tlm8"
Mar 08 03:41:40.854719 master-0 kubenswrapper[33141]: I0308 03:41:40.854677 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/fe851503-1189-44d9-aaf7-2eb9b9b886a1-ovs-socket\") pod \"nmstate-handler-9tlm8\" (UID: \"fe851503-1189-44d9-aaf7-2eb9b9b886a1\") " pod="openshift-nmstate/nmstate-handler-9tlm8"
Mar 08 03:41:40.854719 master-0 kubenswrapper[33141]: I0308 03:41:40.854702 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/49c2416a-c985-49a6-b624-134998684fe6-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-c9mns\" (UID: \"49c2416a-c985-49a6-b624-134998684fe6\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-c9mns"
Mar 08 03:41:40.951748 master-0 kubenswrapper[33141]: I0308 03:41:40.951710 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-c7l6p"]
Mar 08 03:41:40.955709 master-0 kubenswrapper[33141]: I0308 03:41:40.955686 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/49c2416a-c985-49a6-b624-134998684fe6-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-c9mns\" (UID: \"49c2416a-c985-49a6-b624-134998684fe6\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-c9mns"
Mar 08 03:41:40.956594 master-0 kubenswrapper[33141]: I0308 03:41:40.955832 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w2qh\" (UniqueName: \"kubernetes.io/projected/56ce4272-f506-4729-a411-d59d530ed5ea-kube-api-access-2w2qh\") pod \"nmstate-metrics-69594cc75-b6x7j\" (UID: \"56ce4272-f506-4729-a411-d59d530ed5ea\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-b6x7j"
Mar 08 03:41:40.956731 master-0 kubenswrapper[33141]: I0308 03:41:40.956679 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/fe851503-1189-44d9-aaf7-2eb9b9b886a1-nmstate-lock\") pod \"nmstate-handler-9tlm8\" (UID: \"fe851503-1189-44d9-aaf7-2eb9b9b886a1\") " pod="openshift-nmstate/nmstate-handler-9tlm8"
Mar 08 03:41:40.956916 master-0 kubenswrapper[33141]: I0308 03:41:40.956873 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-899ll\" (UniqueName: \"kubernetes.io/projected/49c2416a-c985-49a6-b624-134998684fe6-kube-api-access-899ll\") pod \"nmstate-webhook-786f45cff4-c9mns\" (UID: \"49c2416a-c985-49a6-b624-134998684fe6\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-c9mns"
Mar 08 03:41:40.956979 master-0 kubenswrapper[33141]: I0308 03:41:40.956930 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fpnb\" (UniqueName: \"kubernetes.io/projected/fe851503-1189-44d9-aaf7-2eb9b9b886a1-kube-api-access-8fpnb\") pod \"nmstate-handler-9tlm8\" (UID: \"fe851503-1189-44d9-aaf7-2eb9b9b886a1\") " pod="openshift-nmstate/nmstate-handler-9tlm8"
Mar 08 03:41:40.956979 master-0 kubenswrapper[33141]: I0308 03:41:40.956969 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/fe851503-1189-44d9-aaf7-2eb9b9b886a1-dbus-socket\") pod \"nmstate-handler-9tlm8\" (UID: \"fe851503-1189-44d9-aaf7-2eb9b9b886a1\") " pod="openshift-nmstate/nmstate-handler-9tlm8"
Mar 08 03:41:40.957129 master-0 kubenswrapper[33141]: I0308 03:41:40.957105 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/fe851503-1189-44d9-aaf7-2eb9b9b886a1-ovs-socket\") pod \"nmstate-handler-9tlm8\" (UID: \"fe851503-1189-44d9-aaf7-2eb9b9b886a1\") " pod="openshift-nmstate/nmstate-handler-9tlm8"
Mar 08 03:41:40.957212 master-0 kubenswrapper[33141]: I0308 03:41:40.957195 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/fe851503-1189-44d9-aaf7-2eb9b9b886a1-ovs-socket\") pod \"nmstate-handler-9tlm8\" (UID: \"fe851503-1189-44d9-aaf7-2eb9b9b886a1\") " pod="openshift-nmstate/nmstate-handler-9tlm8"
Mar 08 03:41:40.957251 master-0 kubenswrapper[33141]: I0308 03:41:40.957237 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/fe851503-1189-44d9-aaf7-2eb9b9b886a1-nmstate-lock\") pod \"nmstate-handler-9tlm8\" (UID: \"fe851503-1189-44d9-aaf7-2eb9b9b886a1\") " pod="openshift-nmstate/nmstate-handler-9tlm8"
Mar 08 03:41:40.957687 master-0 kubenswrapper[33141]: I0308 03:41:40.957660 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/fe851503-1189-44d9-aaf7-2eb9b9b886a1-dbus-socket\") pod \"nmstate-handler-9tlm8\" (UID: \"fe851503-1189-44d9-aaf7-2eb9b9b886a1\") " pod="openshift-nmstate/nmstate-handler-9tlm8"
Mar 08 03:41:40.976545 master-0 kubenswrapper[33141]: I0308 03:41:40.973488 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-c7l6p"]
Mar 08 03:41:40.976545 master-0 kubenswrapper[33141]: I0308 03:41:40.973770 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-c7l6p"
Mar 08 03:41:40.977343 master-0 kubenswrapper[33141]: I0308 03:41:40.977268 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/49c2416a-c985-49a6-b624-134998684fe6-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-c9mns\" (UID: \"49c2416a-c985-49a6-b624-134998684fe6\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-c9mns"
Mar 08 03:41:40.977917 master-0 kubenswrapper[33141]: I0308 03:41:40.977884 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Mar 08 03:41:40.978251 master-0 kubenswrapper[33141]: I0308 03:41:40.978238 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Mar 08 03:41:40.978616 master-0 kubenswrapper[33141]: I0308 03:41:40.978548 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w2qh\" (UniqueName: \"kubernetes.io/projected/56ce4272-f506-4729-a411-d59d530ed5ea-kube-api-access-2w2qh\") pod \"nmstate-metrics-69594cc75-b6x7j\" (UID: \"56ce4272-f506-4729-a411-d59d530ed5ea\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-b6x7j"
Mar 08 03:41:40.985571 master-0 kubenswrapper[33141]: I0308 03:41:40.984694 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fpnb\" (UniqueName: \"kubernetes.io/projected/fe851503-1189-44d9-aaf7-2eb9b9b886a1-kube-api-access-8fpnb\") pod \"nmstate-handler-9tlm8\" (UID: \"fe851503-1189-44d9-aaf7-2eb9b9b886a1\") " pod="openshift-nmstate/nmstate-handler-9tlm8"
Mar 08 03:41:40.989458 master-0 kubenswrapper[33141]: I0308 03:41:40.989408 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-899ll\" (UniqueName: \"kubernetes.io/projected/49c2416a-c985-49a6-b624-134998684fe6-kube-api-access-899ll\") pod \"nmstate-webhook-786f45cff4-c9mns\" (UID: \"49c2416a-c985-49a6-b624-134998684fe6\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-c9mns"
Mar 08 03:41:41.058671 master-0 kubenswrapper[33141]: I0308 03:41:41.058627 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkxjl\" (UniqueName: \"kubernetes.io/projected/c84683bd-71a1-47cf-a335-0954d7e82171-kube-api-access-jkxjl\") pod \"nmstate-console-plugin-5dcbbd79cf-c7l6p\" (UID: \"c84683bd-71a1-47cf-a335-0954d7e82171\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-c7l6p"
Mar 08 03:41:41.058746 master-0 kubenswrapper[33141]: I0308 03:41:41.058704 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c84683bd-71a1-47cf-a335-0954d7e82171-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-c7l6p\" (UID: \"c84683bd-71a1-47cf-a335-0954d7e82171\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-c7l6p"
Mar 08 03:41:41.058788 master-0 kubenswrapper[33141]: I0308 03:41:41.058760 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c84683bd-71a1-47cf-a335-0954d7e82171-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-c7l6p\" (UID: \"c84683bd-71a1-47cf-a335-0954d7e82171\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-c7l6p"
Mar 08 03:41:41.098006 master-0 kubenswrapper[33141]: I0308 03:41:41.097935 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-v6lcp" event={"ID":"78826ab3-1b89-4efe-9986-38e67fc8b8f1","Type":"ContainerStarted","Data":"65bb59d15e7be41b52512cffcd751db03c4704a095d4b546b9c37f558895dd21"}
Mar 08 03:41:41.098767 master-0 kubenswrapper[33141]: I0308 03:41:41.098710 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-v6lcp"
Mar 08 03:41:41.100598 master-0 kubenswrapper[33141]: I0308 03:41:41.100568 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jhqp7" event={"ID":"b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d","Type":"ContainerStarted","Data":"6282b62bf6e394a6f61c13329b58078e7c032822421f78bfd765684a55c899fc"}
Mar 08 03:41:41.100864 master-0 kubenswrapper[33141]: I0308 03:41:41.100765 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jhqp7" event={"ID":"b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d","Type":"ContainerStarted","Data":"9fffea19a80a82bfd69def7dddc2d6a8e32a491f97e6972b1fd9ebdb63f39129"}
Mar 08 03:41:41.160166 master-0 kubenswrapper[33141]: I0308 03:41:41.160114 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkxjl\" (UniqueName: \"kubernetes.io/projected/c84683bd-71a1-47cf-a335-0954d7e82171-kube-api-access-jkxjl\") pod \"nmstate-console-plugin-5dcbbd79cf-c7l6p\" (UID: \"c84683bd-71a1-47cf-a335-0954d7e82171\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-c7l6p"
Mar 08 03:41:41.160448 master-0 kubenswrapper[33141]: I0308 03:41:41.160430 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c84683bd-71a1-47cf-a335-0954d7e82171-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-c7l6p\" (UID: \"c84683bd-71a1-47cf-a335-0954d7e82171\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-c7l6p"
Mar 08 03:41:41.160664 master-0 kubenswrapper[33141]: I0308 03:41:41.160643 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c84683bd-71a1-47cf-a335-0954d7e82171-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-c7l6p\" (UID: \"c84683bd-71a1-47cf-a335-0954d7e82171\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-c7l6p"
Mar 08 03:41:41.161848 master-0 kubenswrapper[33141]: I0308 03:41:41.161833 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c84683bd-71a1-47cf-a335-0954d7e82171-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-c7l6p\" (UID: \"c84683bd-71a1-47cf-a335-0954d7e82171\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-c7l6p"
Mar 08 03:41:41.164050 master-0 kubenswrapper[33141]: I0308 03:41:41.164006 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/c84683bd-71a1-47cf-a335-0954d7e82171-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-c7l6p\" (UID: \"c84683bd-71a1-47cf-a335-0954d7e82171\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-c7l6p"
Mar 08 03:41:41.194445 master-0 kubenswrapper[33141]: I0308 03:41:41.194293 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-b6x7j"
Mar 08 03:41:41.232556 master-0 kubenswrapper[33141]: I0308 03:41:41.232029 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-c9mns"
Mar 08 03:41:41.243996 master-0 kubenswrapper[33141]: I0308 03:41:41.243915 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-9tlm8"
Mar 08 03:41:41.289670 master-0 kubenswrapper[33141]: W0308 03:41:41.289620 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe851503_1189_44d9_aaf7_2eb9b9b886a1.slice/crio-d41d6191580a00f63ae6037985f3c0b1c9d6aa55bb48cb36cd8abe9eaa6abc2f WatchSource:0}: Error finding container d41d6191580a00f63ae6037985f3c0b1c9d6aa55bb48cb36cd8abe9eaa6abc2f: Status 404 returned error can't find the container with id d41d6191580a00f63ae6037985f3c0b1c9d6aa55bb48cb36cd8abe9eaa6abc2f
Mar 08 03:41:41.351426 master-0 kubenswrapper[33141]: I0308 03:41:41.351340 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-86ddb6bd46-v6lcp" podStartSLOduration=2.215895453 podStartE2EDuration="3.351318905s" podCreationTimestamp="2026-03-08 03:41:38 +0000 UTC" firstStartedPulling="2026-03-08 03:41:39.599030637 +0000 UTC m=+613.468923850" lastFinishedPulling="2026-03-08 03:41:40.734454109 +0000 UTC m=+614.604347302" observedRunningTime="2026-03-08 03:41:41.345426541 +0000 UTC m=+615.215319744" watchObservedRunningTime="2026-03-08 03:41:41.351318905 +0000 UTC m=+615.221212108"
Mar 08 03:41:41.378627 master-0 kubenswrapper[33141]: I0308 03:41:41.375299 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkxjl\" (UniqueName: \"kubernetes.io/projected/c84683bd-71a1-47cf-a335-0954d7e82171-kube-api-access-jkxjl\") pod \"nmstate-console-plugin-5dcbbd79cf-c7l6p\" (UID: \"c84683bd-71a1-47cf-a335-0954d7e82171\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-c7l6p"
Mar 08 03:41:41.378627 master-0 kubenswrapper[33141]: I0308 03:41:41.375546 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkxjl\" (UniqueName: \"kubernetes.io/projected/c84683bd-71a1-47cf-a335-0954d7e82171-kube-api-access-jkxjl\") pod \"nmstate-console-plugin-5dcbbd79cf-c7l6p\" (UID: \"c84683bd-71a1-47cf-a335-0954d7e82171\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-c7l6p"
Mar 08 03:41:41.378627 master-0 kubenswrapper[33141]: I0308 03:41:41.376405 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkxjl\" (UniqueName: \"kubernetes.io/projected/c84683bd-71a1-47cf-a335-0954d7e82171-kube-api-access-jkxjl\") pod \"nmstate-console-plugin-5dcbbd79cf-c7l6p\" (UID: \"c84683bd-71a1-47cf-a335-0954d7e82171\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-c7l6p"
Mar 08 03:41:41.499525 master-0 kubenswrapper[33141]: I0308 03:41:41.499394 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-949d7c748-h96bz"]
Mar 08 03:41:41.507121 master-0 kubenswrapper[33141]: I0308 03:41:41.505522 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-949d7c748-h96bz"
Mar 08 03:41:41.560337 master-0 kubenswrapper[33141]: I0308 03:41:41.560234 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-949d7c748-h96bz"]
Mar 08 03:41:41.635645 master-0 kubenswrapper[33141]: I0308 03:41:41.635467 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-b6x7j"]
Mar 08 03:41:41.636025 master-0 kubenswrapper[33141]: I0308 03:41:41.635983 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7-console-oauth-config\") pod \"console-949d7c748-h96bz\" (UID: \"7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7\") " pod="openshift-console/console-949d7c748-h96bz"
Mar 08 03:41:41.636362 master-0 kubenswrapper[33141]: I0308 03:41:41.636339 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hfcn\" (UniqueName: \"kubernetes.io/projected/7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7-kube-api-access-8hfcn\") pod \"console-949d7c748-h96bz\" (UID: \"7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7\") " pod="openshift-console/console-949d7c748-h96bz"
Mar 08 03:41:41.636915 master-0 kubenswrapper[33141]: I0308 03:41:41.636884 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7-console-config\") pod \"console-949d7c748-h96bz\" (UID: \"7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7\") " pod="openshift-console/console-949d7c748-h96bz"
Mar 08 03:41:41.637119 master-0 kubenswrapper[33141]: I0308 03:41:41.637105 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7-trusted-ca-bundle\") pod \"console-949d7c748-h96bz\" (UID: \"7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7\") " pod="openshift-console/console-949d7c748-h96bz"
Mar 08 03:41:41.637437 master-0 kubenswrapper[33141]: I0308 03:41:41.637423 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7-service-ca\") pod \"console-949d7c748-h96bz\" (UID: \"7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7\") " pod="openshift-console/console-949d7c748-h96bz"
Mar 08 03:41:41.637754 master-0 kubenswrapper[33141]: I0308 03:41:41.637726 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7-console-serving-cert\") pod \"console-949d7c748-h96bz\" (UID: \"7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7\") " pod="openshift-console/console-949d7c748-h96bz"
Mar 08 03:41:41.637871 master-0 kubenswrapper[33141]: I0308 03:41:41.637857 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7-oauth-serving-cert\") pod \"console-949d7c748-h96bz\" (UID: \"7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7\") " pod="openshift-console/console-949d7c748-h96bz"
Mar 08 03:41:41.641635 master-0 kubenswrapper[33141]: W0308 03:41:41.641599 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ce4272_f506_4729_a411_d59d530ed5ea.slice/crio-4a9d9c405a491ed6e8922e9ccb18e0b1f4f568a02dd95aaebcbd5cd65db57aaa WatchSource:0}: Error finding container 4a9d9c405a491ed6e8922e9ccb18e0b1f4f568a02dd95aaebcbd5cd65db57aaa: Status 404 returned error can't find the container with id 4a9d9c405a491ed6e8922e9ccb18e0b1f4f568a02dd95aaebcbd5cd65db57aaa
Mar 08 03:41:41.659790 master-0 kubenswrapper[33141]: I0308 03:41:41.659737 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-c7l6p"
Mar 08 03:41:41.741390 master-0 kubenswrapper[33141]: I0308 03:41:41.741316 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7-console-oauth-config\") pod \"console-949d7c748-h96bz\" (UID: \"7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7\") " pod="openshift-console/console-949d7c748-h96bz"
Mar 08 03:41:41.741390 master-0 kubenswrapper[33141]: I0308 03:41:41.741368 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hfcn\" (UniqueName: \"kubernetes.io/projected/7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7-kube-api-access-8hfcn\") pod \"console-949d7c748-h96bz\" (UID: \"7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7\") " pod="openshift-console/console-949d7c748-h96bz"
Mar 08 03:41:41.741681 master-0 kubenswrapper[33141]: I0308 03:41:41.741421 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7-console-config\") pod \"console-949d7c748-h96bz\" (UID: \"7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7\") " pod="openshift-console/console-949d7c748-h96bz"
Mar 08 03:41:41.741681 master-0 kubenswrapper[33141]: I0308 03:41:41.741452 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7-trusted-ca-bundle\") pod \"console-949d7c748-h96bz\" (UID: \"7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7\") " pod="openshift-console/console-949d7c748-h96bz"
Mar 08 03:41:41.741767 master-0 kubenswrapper[33141]: I0308 03:41:41.741751 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7-service-ca\") pod \"console-949d7c748-h96bz\" (UID: \"7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7\") " pod="openshift-console/console-949d7c748-h96bz"
Mar 08 03:41:41.741827 master-0 kubenswrapper[33141]: I0308 03:41:41.741794 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7-console-serving-cert\") pod \"console-949d7c748-h96bz\" (UID: \"7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7\") " pod="openshift-console/console-949d7c748-h96bz"
Mar 08 03:41:41.741827 master-0 kubenswrapper[33141]: I0308 03:41:41.741819 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7-oauth-serving-cert\") pod \"console-949d7c748-h96bz\" (UID: \"7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7\") " pod="openshift-console/console-949d7c748-h96bz"
Mar 08 03:41:41.742803 master-0 kubenswrapper[33141]: I0308 03:41:41.742724 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7-trusted-ca-bundle\") pod \"console-949d7c748-h96bz\" (UID: \"7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7\") " pod="openshift-console/console-949d7c748-h96bz"
Mar 08 03:41:41.743038 master-0 kubenswrapper[33141]: I0308 03:41:41.743010 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7-oauth-serving-cert\") pod \"console-949d7c748-h96bz\" (UID: \"7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7\") " pod="openshift-console/console-949d7c748-h96bz"
Mar 08 03:41:41.744135 master-0 kubenswrapper[33141]: I0308 03:41:41.744086 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7-console-oauth-config\") pod \"console-949d7c748-h96bz\" (UID: \"7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7\") " pod="openshift-console/console-949d7c748-h96bz"
Mar 08 03:41:41.745039 master-0 kubenswrapper[33141]: I0308 03:41:41.744996 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7-console-serving-cert\") pod \"console-949d7c748-h96bz\" (UID: \"7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7\") " pod="openshift-console/console-949d7c748-h96bz"
Mar 08 03:41:41.748687 master-0 kubenswrapper[33141]: I0308 03:41:41.748648 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7-console-config\") pod \"console-949d7c748-h96bz\" (UID: \"7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7\") " pod="openshift-console/console-949d7c748-h96bz"
Mar 08 03:41:41.752361 master-0 kubenswrapper[33141]: I0308 03:41:41.752069 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7-service-ca\") pod \"console-949d7c748-h96bz\" (UID: \"7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7\") " pod="openshift-console/console-949d7c748-h96bz"
Mar 08 03:41:41.758418 master-0 kubenswrapper[33141]: I0308 03:41:41.758362 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hfcn\" (UniqueName: \"kubernetes.io/projected/7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7-kube-api-access-8hfcn\") pod \"console-949d7c748-h96bz\" (UID: \"7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7\") " pod="openshift-console/console-949d7c748-h96bz"
Mar 08 03:41:41.762884 master-0 kubenswrapper[33141]: I0308 03:41:41.762845 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-c9mns"]
Mar 08 03:41:41.771077 master-0 kubenswrapper[33141]: W0308 03:41:41.771027 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49c2416a_c985_49a6_b624_134998684fe6.slice/crio-1e0a0988123f71a306ab6e44206ba5db9eba4010762ec76c65e87f1660e1acee WatchSource:0}: Error finding container 1e0a0988123f71a306ab6e44206ba5db9eba4010762ec76c65e87f1660e1acee: Status 404 returned error can't find the container with id 1e0a0988123f71a306ab6e44206ba5db9eba4010762ec76c65e87f1660e1acee
Mar 08 03:41:41.854044 master-0 kubenswrapper[33141]: I0308 03:41:41.853869 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-949d7c748-h96bz"
Mar 08 03:41:42.104368 master-0 kubenswrapper[33141]: I0308 03:41:42.104327 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-c7l6p"]
Mar 08 03:41:42.108472 master-0 kubenswrapper[33141]: I0308 03:41:42.108445 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9tlm8" event={"ID":"fe851503-1189-44d9-aaf7-2eb9b9b886a1","Type":"ContainerStarted","Data":"d41d6191580a00f63ae6037985f3c0b1c9d6aa55bb48cb36cd8abe9eaa6abc2f"}
Mar 08 03:41:42.109934 master-0 kubenswrapper[33141]: I0308 03:41:42.109866 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-b6x7j" event={"ID":"56ce4272-f506-4729-a411-d59d530ed5ea","Type":"ContainerStarted","Data":"4a9d9c405a491ed6e8922e9ccb18e0b1f4f568a02dd95aaebcbd5cd65db57aaa"}
Mar 08 03:41:42.112189 master-0 kubenswrapper[33141]: I0308 03:41:42.112141 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jhqp7" event={"ID":"b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d","Type":"ContainerStarted","Data":"8c2d8d6945ae76f674c4b5ff24b4640bd7c4cffd321db2789545f4beba91188d"}
Mar 08 03:41:42.112494 master-0 kubenswrapper[33141]: I0308 03:41:42.112472 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-jhqp7"
Mar 08 03:41:42.112975 master-0 kubenswrapper[33141]: W0308 03:41:42.112940 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc84683bd_71a1_47cf_a335_0954d7e82171.slice/crio-ad2fd35bc9e9784b62f26b93a90682894263edd3d4fd8c1b29635a47555122ff WatchSource:0}: Error finding container ad2fd35bc9e9784b62f26b93a90682894263edd3d4fd8c1b29635a47555122ff: Status 404 returned error can't find the container with id ad2fd35bc9e9784b62f26b93a90682894263edd3d4fd8c1b29635a47555122ff
Mar 08 03:41:42.113665 master-0 kubenswrapper[33141]: I0308 03:41:42.113633 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-c9mns" event={"ID":"49c2416a-c985-49a6-b624-134998684fe6","Type":"ContainerStarted","Data":"1e0a0988123f71a306ab6e44206ba5db9eba4010762ec76c65e87f1660e1acee"}
Mar 08 03:41:42.142681 master-0 kubenswrapper[33141]: I0308 03:41:42.142395 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-jhqp7" podStartSLOduration=4.14231716 podStartE2EDuration="4.14231716s" podCreationTimestamp="2026-03-08 03:41:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:41:42.131004265 +0000 UTC m=+616.000897458" watchObservedRunningTime="2026-03-08 03:41:42.14231716 +0000 UTC m=+616.012210353"
Mar 08 03:41:42.294869 master-0 kubenswrapper[33141]: I0308 03:41:42.294818 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-949d7c748-h96bz"]
Mar 08 03:41:43.124571 master-0 kubenswrapper[33141]: I0308 03:41:43.124455 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-c7l6p" event={"ID":"c84683bd-71a1-47cf-a335-0954d7e82171","Type":"ContainerStarted","Data":"ad2fd35bc9e9784b62f26b93a90682894263edd3d4fd8c1b29635a47555122ff"}
Mar 08 03:41:43.131668 master-0 kubenswrapper[33141]: I0308 03:41:43.131406 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-949d7c748-h96bz" event={"ID":"7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7","Type":"ContainerStarted","Data":"d9d57b5abfc011ed745ac6d3364ce0db9e542691fd4c7bf10e54bce84994fbdc"}
Mar 08 03:41:43.131668 master-0 kubenswrapper[33141]: I0308 03:41:43.131480 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-949d7c748-h96bz" event={"ID":"7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7","Type":"ContainerStarted","Data":"04e9572a121a68a655410671fbc3600af49f200b1ccbf4ea672ae81a7850231d"}
Mar 08 03:41:46.386641 master-0 kubenswrapper[33141]: I0308 03:41:46.386527 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-949d7c748-h96bz" podStartSLOduration=5.386504302 podStartE2EDuration="5.386504302s" podCreationTimestamp="2026-03-08 03:41:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:41:43.157229451 +0000 UTC m=+617.027122644" watchObservedRunningTime="2026-03-08 03:41:46.386504302 +0000 UTC m=+620.256397505"
Mar 08 03:41:48.201926 master-0 kubenswrapper[33141]: I0308 03:41:48.201203 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-c7l6p" event={"ID":"c84683bd-71a1-47cf-a335-0954d7e82171","Type":"ContainerStarted","Data":"3b9267f1d8643030504bbdf8715b1d68e8386fa01b3d87083ed63b26c61a44f0"}
Mar 08 03:41:48.213015 master-0 kubenswrapper[33141]: I0308 03:41:48.205988 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-b6x7j"
event={"ID":"56ce4272-f506-4729-a411-d59d530ed5ea","Type":"ContainerStarted","Data":"11f14ab17642133294e164147da9c0267532417249a778920c37be802b88d524"} Mar 08 03:41:48.213015 master-0 kubenswrapper[33141]: I0308 03:41:48.206107 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-b6x7j" event={"ID":"56ce4272-f506-4729-a411-d59d530ed5ea","Type":"ContainerStarted","Data":"f5f81e37bb542d2d431df80ae0758804c33747475df86aeb3f2b6e92afe61234"} Mar 08 03:41:48.213015 master-0 kubenswrapper[33141]: I0308 03:41:48.207518 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-hjjnv" event={"ID":"30678329-c9f2-4958-9b2d-6bacd9250bbe","Type":"ContainerStarted","Data":"ced2d4ca86bed5d85aa2d90dad0c71560de4f6de582f9966556bba0ae195cbe4"} Mar 08 03:41:48.213015 master-0 kubenswrapper[33141]: I0308 03:41:48.207800 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-hjjnv" Mar 08 03:41:48.213015 master-0 kubenswrapper[33141]: I0308 03:41:48.211607 33141 generic.go:334] "Generic (PLEG): container finished" podID="c1220927-804a-457f-81bf-e599bac8f203" containerID="e4d5bd5edbceef6181721a6f2641c7cd48f8d233963fab74b4458ec3ba91c205" exitCode=0 Mar 08 03:41:48.213015 master-0 kubenswrapper[33141]: I0308 03:41:48.211663 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mhfnq" event={"ID":"c1220927-804a-457f-81bf-e599bac8f203","Type":"ContainerDied","Data":"e4d5bd5edbceef6181721a6f2641c7cd48f8d233963fab74b4458ec3ba91c205"} Mar 08 03:41:48.214225 master-0 kubenswrapper[33141]: I0308 03:41:48.214165 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-c9mns" event={"ID":"49c2416a-c985-49a6-b624-134998684fe6","Type":"ContainerStarted","Data":"57254ce35a0da4c92d5b9102d7523aa2898803678c345ae4b6a3c136c7e4f1a4"} Mar 08 03:41:48.214844 
master-0 kubenswrapper[33141]: I0308 03:41:48.214807 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-c9mns" Mar 08 03:41:48.216608 master-0 kubenswrapper[33141]: I0308 03:41:48.216577 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9tlm8" event={"ID":"fe851503-1189-44d9-aaf7-2eb9b9b886a1","Type":"ContainerStarted","Data":"41c2c9f46db8434e2b6dc077c04727b161084cda80b2382dfb665da6f74e283a"} Mar 08 03:41:48.218296 master-0 kubenswrapper[33141]: I0308 03:41:48.218236 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-9tlm8" Mar 08 03:41:48.230995 master-0 kubenswrapper[33141]: I0308 03:41:48.230794 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-c7l6p" podStartSLOduration=2.821830079 podStartE2EDuration="8.230765618s" podCreationTimestamp="2026-03-08 03:41:40 +0000 UTC" firstStartedPulling="2026-03-08 03:41:42.117044021 +0000 UTC m=+615.986937214" lastFinishedPulling="2026-03-08 03:41:47.52597952 +0000 UTC m=+621.395872753" observedRunningTime="2026-03-08 03:41:48.223532539 +0000 UTC m=+622.093425732" watchObservedRunningTime="2026-03-08 03:41:48.230765618 +0000 UTC m=+622.100658851" Mar 08 03:41:48.293151 master-0 kubenswrapper[33141]: I0308 03:41:48.293041 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-786f45cff4-c9mns" podStartSLOduration=2.669220312 podStartE2EDuration="8.29301157s" podCreationTimestamp="2026-03-08 03:41:40 +0000 UTC" firstStartedPulling="2026-03-08 03:41:41.773542419 +0000 UTC m=+615.643435612" lastFinishedPulling="2026-03-08 03:41:47.397333677 +0000 UTC m=+621.267226870" observedRunningTime="2026-03-08 03:41:48.283439431 +0000 UTC m=+622.153332674" watchObservedRunningTime="2026-03-08 03:41:48.29301157 +0000 UTC 
m=+622.162904793" Mar 08 03:41:48.334062 master-0 kubenswrapper[33141]: I0308 03:41:48.333965 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-hjjnv" podStartSLOduration=2.257121659 podStartE2EDuration="10.333947157s" podCreationTimestamp="2026-03-08 03:41:38 +0000 UTC" firstStartedPulling="2026-03-08 03:41:39.292621312 +0000 UTC m=+613.162514505" lastFinishedPulling="2026-03-08 03:41:47.36944681 +0000 UTC m=+621.239340003" observedRunningTime="2026-03-08 03:41:48.320322762 +0000 UTC m=+622.190215955" watchObservedRunningTime="2026-03-08 03:41:48.333947157 +0000 UTC m=+622.203840350" Mar 08 03:41:48.372843 master-0 kubenswrapper[33141]: I0308 03:41:48.372772 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-69594cc75-b6x7j" podStartSLOduration=2.646298055 podStartE2EDuration="8.372738588s" podCreationTimestamp="2026-03-08 03:41:40 +0000 UTC" firstStartedPulling="2026-03-08 03:41:41.642998187 +0000 UTC m=+615.512891380" lastFinishedPulling="2026-03-08 03:41:47.36943871 +0000 UTC m=+621.239331913" observedRunningTime="2026-03-08 03:41:48.349928504 +0000 UTC m=+622.219821707" watchObservedRunningTime="2026-03-08 03:41:48.372738588 +0000 UTC m=+622.242631781" Mar 08 03:41:48.384933 master-0 kubenswrapper[33141]: I0308 03:41:48.381382 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-9tlm8" podStartSLOduration=2.304637822 podStartE2EDuration="8.381360913s" podCreationTimestamp="2026-03-08 03:41:40 +0000 UTC" firstStartedPulling="2026-03-08 03:41:41.291737223 +0000 UTC m=+615.161630416" lastFinishedPulling="2026-03-08 03:41:47.368460314 +0000 UTC m=+621.238353507" observedRunningTime="2026-03-08 03:41:48.365232622 +0000 UTC m=+622.235125825" watchObservedRunningTime="2026-03-08 03:41:48.381360913 +0000 UTC m=+622.251254116" Mar 08 03:41:49.017445 master-0 
kubenswrapper[33141]: I0308 03:41:49.017377 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-v6lcp" Mar 08 03:41:49.229969 master-0 kubenswrapper[33141]: I0308 03:41:49.229865 33141 generic.go:334] "Generic (PLEG): container finished" podID="c1220927-804a-457f-81bf-e599bac8f203" containerID="b810d602c8b10092767c1b2eb0fa97410fe77378ad236b168d45e028ffdfe3d1" exitCode=0 Mar 08 03:41:49.229969 master-0 kubenswrapper[33141]: I0308 03:41:49.229947 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mhfnq" event={"ID":"c1220927-804a-457f-81bf-e599bac8f203","Type":"ContainerDied","Data":"b810d602c8b10092767c1b2eb0fa97410fe77378ad236b168d45e028ffdfe3d1"} Mar 08 03:41:50.250075 master-0 kubenswrapper[33141]: I0308 03:41:50.249990 33141 generic.go:334] "Generic (PLEG): container finished" podID="c1220927-804a-457f-81bf-e599bac8f203" containerID="5678970a2e13c93789e587d1f62691f0763c0d264d2fb881b90fb9f48c9cfb7d" exitCode=0 Mar 08 03:41:50.250934 master-0 kubenswrapper[33141]: I0308 03:41:50.250851 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mhfnq" event={"ID":"c1220927-804a-457f-81bf-e599bac8f203","Type":"ContainerDied","Data":"5678970a2e13c93789e587d1f62691f0763c0d264d2fb881b90fb9f48c9cfb7d"} Mar 08 03:41:50.497410 master-0 kubenswrapper[33141]: I0308 03:41:50.497350 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-jhqp7" Mar 08 03:41:51.265718 master-0 kubenswrapper[33141]: I0308 03:41:51.265580 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mhfnq" event={"ID":"c1220927-804a-457f-81bf-e599bac8f203","Type":"ContainerStarted","Data":"8447cea8fb74da0b2dd9fc4d91bcee892650b44dc55077e0918cd4ed8d8074bf"} Mar 08 03:41:51.265718 master-0 kubenswrapper[33141]: I0308 03:41:51.265646 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-mhfnq" event={"ID":"c1220927-804a-457f-81bf-e599bac8f203","Type":"ContainerStarted","Data":"c1cd04676fdf083deba40c84e0192b6ca6184e41c504addbeddc9434a32643ae"} Mar 08 03:41:51.265718 master-0 kubenswrapper[33141]: I0308 03:41:51.265661 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mhfnq" event={"ID":"c1220927-804a-457f-81bf-e599bac8f203","Type":"ContainerStarted","Data":"5ef9d9ed31dc7d2a6a55ce6bc8ea056efeeaaa3fe6b772243725ee4be6d95b42"} Mar 08 03:41:51.265718 master-0 kubenswrapper[33141]: I0308 03:41:51.265673 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mhfnq" event={"ID":"c1220927-804a-457f-81bf-e599bac8f203","Type":"ContainerStarted","Data":"a45b8707462c436493612caa1d489eaea5a02ca47a59b3c1cbb05e7eeb82989c"} Mar 08 03:41:51.265718 master-0 kubenswrapper[33141]: I0308 03:41:51.265684 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mhfnq" event={"ID":"c1220927-804a-457f-81bf-e599bac8f203","Type":"ContainerStarted","Data":"cd3ff0cb18cccff271475e22a97c1be89d881c45c76202bf6484cd6eb7dca34b"} Mar 08 03:41:51.854948 master-0 kubenswrapper[33141]: I0308 03:41:51.854865 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-949d7c748-h96bz" Mar 08 03:41:51.855444 master-0 kubenswrapper[33141]: I0308 03:41:51.854969 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-949d7c748-h96bz" Mar 08 03:41:51.865753 master-0 kubenswrapper[33141]: I0308 03:41:51.865699 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-949d7c748-h96bz" Mar 08 03:41:52.288653 master-0 kubenswrapper[33141]: I0308 03:41:52.288548 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-mhfnq" 
event={"ID":"c1220927-804a-457f-81bf-e599bac8f203","Type":"ContainerStarted","Data":"9ceb4f760fb65a407be9559d59cbe0f2612b1b7d3f04dd5b50fb3e102acf889b"} Mar 08 03:41:52.290241 master-0 kubenswrapper[33141]: I0308 03:41:52.290090 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:52.293882 master-0 kubenswrapper[33141]: I0308 03:41:52.293722 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-949d7c748-h96bz" Mar 08 03:41:52.336843 master-0 kubenswrapper[33141]: I0308 03:41:52.336680 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-mhfnq" podStartSLOduration=5.975851904 podStartE2EDuration="14.336644084s" podCreationTimestamp="2026-03-08 03:41:38 +0000 UTC" firstStartedPulling="2026-03-08 03:41:39.008596349 +0000 UTC m=+612.878489562" lastFinishedPulling="2026-03-08 03:41:47.369388549 +0000 UTC m=+621.239281742" observedRunningTime="2026-03-08 03:41:52.330752931 +0000 UTC m=+626.200646174" watchObservedRunningTime="2026-03-08 03:41:52.336644084 +0000 UTC m=+626.206537317" Mar 08 03:41:52.452948 master-0 kubenswrapper[33141]: I0308 03:41:52.444610 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-75d8bd58cb-xqq9p"] Mar 08 03:41:53.887892 master-0 kubenswrapper[33141]: I0308 03:41:53.887830 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:53.921001 master-0 kubenswrapper[33141]: I0308 03:41:53.920900 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:41:56.288818 master-0 kubenswrapper[33141]: I0308 03:41:56.288713 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-9tlm8" Mar 08 03:41:58.877830 master-0 kubenswrapper[33141]: I0308 03:41:58.877717 33141 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-hjjnv" Mar 08 03:42:01.239055 master-0 kubenswrapper[33141]: I0308 03:42:01.238990 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-c9mns" Mar 08 03:42:05.900144 master-0 kubenswrapper[33141]: I0308 03:42:05.900041 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-6wwxp"] Mar 08 03:42:05.906809 master-0 kubenswrapper[33141]: I0308 03:42:05.906732 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:05.925162 master-0 kubenswrapper[33141]: I0308 03:42:05.925092 33141 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert" Mar 08 03:42:06.007079 master-0 kubenswrapper[33141]: I0308 03:42:06.007011 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-6wwxp"] Mar 08 03:42:06.033460 master-0 kubenswrapper[33141]: I0308 03:42:06.033399 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x27j\" (UniqueName: \"kubernetes.io/projected/96fe6f11-1fc7-4887-920b-80ed59b73d66-kube-api-access-4x27j\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.033789 master-0 kubenswrapper[33141]: I0308 03:42:06.033766 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-node-plugin-dir\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.033885 master-0 kubenswrapper[33141]: I0308 03:42:06.033870 33141 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-csi-plugin-dir\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.034031 master-0 kubenswrapper[33141]: I0308 03:42:06.034012 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-device-dir\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.034145 master-0 kubenswrapper[33141]: I0308 03:42:06.034128 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-run-udev\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.034263 master-0 kubenswrapper[33141]: I0308 03:42:06.034247 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-registration-dir\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.034367 master-0 kubenswrapper[33141]: I0308 03:42:06.034350 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-sys\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.034495 master-0 kubenswrapper[33141]: I0308 03:42:06.034479 33141 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-lvmd-config\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.034611 master-0 kubenswrapper[33141]: I0308 03:42:06.034595 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/96fe6f11-1fc7-4887-920b-80ed59b73d66-metrics-cert\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.034729 master-0 kubenswrapper[33141]: I0308 03:42:06.034703 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-file-lock-dir\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.034884 master-0 kubenswrapper[33141]: I0308 03:42:06.034867 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-pod-volumes-dir\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.136469 master-0 kubenswrapper[33141]: I0308 03:42:06.136419 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-lvmd-config\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.137029 master-0 kubenswrapper[33141]: I0308 03:42:06.137014 33141 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/96fe6f11-1fc7-4887-920b-80ed59b73d66-metrics-cert\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.137567 master-0 kubenswrapper[33141]: I0308 03:42:06.137550 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-file-lock-dir\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.137752 master-0 kubenswrapper[33141]: I0308 03:42:06.136961 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-lvmd-config\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.137877 master-0 kubenswrapper[33141]: I0308 03:42:06.137864 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-pod-volumes-dir\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.138050 master-0 kubenswrapper[33141]: I0308 03:42:06.138036 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x27j\" (UniqueName: \"kubernetes.io/projected/96fe6f11-1fc7-4887-920b-80ed59b73d66-kube-api-access-4x27j\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.138157 master-0 kubenswrapper[33141]: I0308 03:42:06.138144 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-node-plugin-dir\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.138405 master-0 kubenswrapper[33141]: I0308 03:42:06.138380 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-csi-plugin-dir\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.138516 master-0 kubenswrapper[33141]: I0308 03:42:06.138503 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-device-dir\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.138636 master-0 kubenswrapper[33141]: I0308 03:42:06.138605 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-csi-plugin-dir\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.138685 master-0 kubenswrapper[33141]: I0308 03:42:06.138626 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-device-dir\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.138685 master-0 kubenswrapper[33141]: I0308 03:42:06.137969 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: 
\"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-pod-volumes-dir\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.138746 master-0 kubenswrapper[33141]: I0308 03:42:06.138614 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-run-udev\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.138795 master-0 kubenswrapper[33141]: I0308 03:42:06.138771 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-registration-dir\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.138836 master-0 kubenswrapper[33141]: I0308 03:42:06.138803 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-sys\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.138896 master-0 kubenswrapper[33141]: I0308 03:42:06.138881 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-run-udev\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.138976 master-0 kubenswrapper[33141]: I0308 03:42:06.137875 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-file-lock-dir\") pod \"vg-manager-6wwxp\" (UID: 
\"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.139041 master-0 kubenswrapper[33141]: I0308 03:42:06.138349 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-node-plugin-dir\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.139106 master-0 kubenswrapper[33141]: I0308 03:42:06.139076 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-registration-dir\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.139173 master-0 kubenswrapper[33141]: I0308 03:42:06.139108 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/96fe6f11-1fc7-4887-920b-80ed59b73d66-sys\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.141148 master-0 kubenswrapper[33141]: I0308 03:42:06.141130 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/96fe6f11-1fc7-4887-920b-80ed59b73d66-metrics-cert\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.164042 master-0 kubenswrapper[33141]: I0308 03:42:06.163959 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x27j\" (UniqueName: \"kubernetes.io/projected/96fe6f11-1fc7-4887-920b-80ed59b73d66-kube-api-access-4x27j\") pod \"vg-manager-6wwxp\" (UID: \"96fe6f11-1fc7-4887-920b-80ed59b73d66\") " pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.279806 
master-0 kubenswrapper[33141]: I0308 03:42:06.279742 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-6wwxp" Mar 08 03:42:06.767363 master-0 kubenswrapper[33141]: I0308 03:42:06.767115 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-6wwxp"] Mar 08 03:42:07.473240 master-0 kubenswrapper[33141]: I0308 03:42:07.473178 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-6wwxp" event={"ID":"96fe6f11-1fc7-4887-920b-80ed59b73d66","Type":"ContainerStarted","Data":"e9beb9c068f222f130422ffc6a445e2c901b1a35c36995514f61457d29dc254f"} Mar 08 03:42:07.473240 master-0 kubenswrapper[33141]: I0308 03:42:07.473238 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-6wwxp" event={"ID":"96fe6f11-1fc7-4887-920b-80ed59b73d66","Type":"ContainerStarted","Data":"b3f6d6561e4ca7e748fd0840a799ac35d0756e9a4fa505138e16c2de1e7dd75e"} Mar 08 03:42:07.496048 master-0 kubenswrapper[33141]: I0308 03:42:07.493777 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-6wwxp" podStartSLOduration=2.493755449 podStartE2EDuration="2.493755449s" podCreationTimestamp="2026-03-08 03:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:42:07.49072707 +0000 UTC m=+641.360620283" watchObservedRunningTime="2026-03-08 03:42:07.493755449 +0000 UTC m=+641.363648642" Mar 08 03:42:08.890125 master-0 kubenswrapper[33141]: I0308 03:42:08.890020 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-mhfnq" Mar 08 03:42:11.505585 master-0 kubenswrapper[33141]: I0308 03:42:11.505501 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-6wwxp_96fe6f11-1fc7-4887-920b-80ed59b73d66/vg-manager/0.log" Mar 08 
03:42:11.506132 master-0 kubenswrapper[33141]: I0308 03:42:11.506105 33141 generic.go:334] "Generic (PLEG): container finished" podID="96fe6f11-1fc7-4887-920b-80ed59b73d66" containerID="e9beb9c068f222f130422ffc6a445e2c901b1a35c36995514f61457d29dc254f" exitCode=1 Mar 08 03:42:11.506295 master-0 kubenswrapper[33141]: I0308 03:42:11.506245 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-6wwxp" event={"ID":"96fe6f11-1fc7-4887-920b-80ed59b73d66","Type":"ContainerDied","Data":"e9beb9c068f222f130422ffc6a445e2c901b1a35c36995514f61457d29dc254f"} Mar 08 03:42:11.507377 master-0 kubenswrapper[33141]: I0308 03:42:11.507364 33141 scope.go:117] "RemoveContainer" containerID="e9beb9c068f222f130422ffc6a445e2c901b1a35c36995514f61457d29dc254f" Mar 08 03:42:11.843593 master-0 kubenswrapper[33141]: I0308 03:42:11.841954 33141 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock" Mar 08 03:42:12.519090 master-0 kubenswrapper[33141]: I0308 03:42:12.519016 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-6wwxp_96fe6f11-1fc7-4887-920b-80ed59b73d66/vg-manager/0.log" Mar 08 03:42:12.519675 master-0 kubenswrapper[33141]: I0308 03:42:12.519124 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-6wwxp" event={"ID":"96fe6f11-1fc7-4887-920b-80ed59b73d66","Type":"ContainerStarted","Data":"59011d7f4fe2213c26f02ab5c207d4956fc529d508095b82109d965d4a685d5a"} Mar 08 03:42:12.713326 master-0 kubenswrapper[33141]: I0308 03:42:12.713157 33141 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-03-08T03:42:11.842016284Z","Handler":null,"Name":""} Mar 08 03:42:12.716272 master-0 kubenswrapper[33141]: I0308 03:42:12.716223 33141 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a 
new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0
Mar 08 03:42:12.716272 master-0 kubenswrapper[33141]: I0308 03:42:12.716275 33141 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock
Mar 08 03:42:16.281906 master-0 kubenswrapper[33141]: I0308 03:42:16.280311 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-6wwxp"
Mar 08 03:42:16.284267 master-0 kubenswrapper[33141]: I0308 03:42:16.284200 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-6wwxp"
Mar 08 03:42:16.566616 master-0 kubenswrapper[33141]: I0308 03:42:16.566464 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-6wwxp"
Mar 08 03:42:16.568264 master-0 kubenswrapper[33141]: I0308 03:42:16.568189 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-6wwxp"
Mar 08 03:42:17.508287 master-0 kubenswrapper[33141]: I0308 03:42:17.508148 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-75d8bd58cb-xqq9p" podUID="a5e710d0-27ee-4931-b0d6-1fe5e7e8215d" containerName="console" containerID="cri-o://0780c1ed6798a631014fca857258a7eb9ea778da9cadfebf0db1635c85c06864" gracePeriod=15
Mar 08 03:42:18.121443 master-0 kubenswrapper[33141]: I0308 03:42:18.121373 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-75d8bd58cb-xqq9p_a5e710d0-27ee-4931-b0d6-1fe5e7e8215d/console/0.log"
Mar 08 03:42:18.121644 master-0 kubenswrapper[33141]: I0308 03:42:18.121548 33141 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:42:18.288945 master-0 kubenswrapper[33141]: I0308 03:42:18.286805 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-trusted-ca-bundle\") pod \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") "
Mar 08 03:42:18.288945 master-0 kubenswrapper[33141]: I0308 03:42:18.286968 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-service-ca\") pod \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") "
Mar 08 03:42:18.288945 master-0 kubenswrapper[33141]: I0308 03:42:18.287030 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd8x8\" (UniqueName: \"kubernetes.io/projected/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-kube-api-access-nd8x8\") pod \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") "
Mar 08 03:42:18.288945 master-0 kubenswrapper[33141]: I0308 03:42:18.287082 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-console-config\") pod \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") "
Mar 08 03:42:18.288945 master-0 kubenswrapper[33141]: I0308 03:42:18.287122 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-oauth-serving-cert\") pod \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") "
Mar 08 03:42:18.288945 master-0 kubenswrapper[33141]: I0308
03:42:18.287202 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-console-oauth-config\") pod \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") "
Mar 08 03:42:18.288945 master-0 kubenswrapper[33141]: I0308 03:42:18.287283 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-console-serving-cert\") pod \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\" (UID: \"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d\") "
Mar 08 03:42:18.291172 master-0 kubenswrapper[33141]: I0308 03:42:18.291106 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-kube-api-access-nd8x8" (OuterVolumeSpecName: "kube-api-access-nd8x8") pod "a5e710d0-27ee-4931-b0d6-1fe5e7e8215d" (UID: "a5e710d0-27ee-4931-b0d6-1fe5e7e8215d"). InnerVolumeSpecName "kube-api-access-nd8x8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:42:18.291488 master-0 kubenswrapper[33141]: I0308 03:42:18.291465 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a5e710d0-27ee-4931-b0d6-1fe5e7e8215d" (UID: "a5e710d0-27ee-4931-b0d6-1fe5e7e8215d"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 03:42:18.291830 master-0 kubenswrapper[33141]: I0308 03:42:18.291779 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-service-ca" (OuterVolumeSpecName: "service-ca") pod "a5e710d0-27ee-4931-b0d6-1fe5e7e8215d" (UID: "a5e710d0-27ee-4931-b0d6-1fe5e7e8215d").
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 03:42:18.292129 master-0 kubenswrapper[33141]: I0308 03:42:18.292097 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a5e710d0-27ee-4931-b0d6-1fe5e7e8215d" (UID: "a5e710d0-27ee-4931-b0d6-1fe5e7e8215d"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 03:42:18.292425 master-0 kubenswrapper[33141]: I0308 03:42:18.292405 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-console-config" (OuterVolumeSpecName: "console-config") pod "a5e710d0-27ee-4931-b0d6-1fe5e7e8215d" (UID: "a5e710d0-27ee-4931-b0d6-1fe5e7e8215d"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 03:42:18.299376 master-0 kubenswrapper[33141]: I0308 03:42:18.299170 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a5e710d0-27ee-4931-b0d6-1fe5e7e8215d" (UID: "a5e710d0-27ee-4931-b0d6-1fe5e7e8215d"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 03:42:18.305998 master-0 kubenswrapper[33141]: I0308 03:42:18.305826 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a5e710d0-27ee-4931-b0d6-1fe5e7e8215d" (UID: "a5e710d0-27ee-4931-b0d6-1fe5e7e8215d"). InnerVolumeSpecName "console-oauth-config".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 03:42:18.389431 master-0 kubenswrapper[33141]: I0308 03:42:18.389358 33141 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 08 03:42:18.389431 master-0 kubenswrapper[33141]: I0308 03:42:18.389425 33141 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 08 03:42:18.389431 master-0 kubenswrapper[33141]: I0308 03:42:18.389440 33141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd8x8\" (UniqueName: \"kubernetes.io/projected/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-kube-api-access-nd8x8\") on node \"master-0\" DevicePath \"\""
Mar 08 03:42:18.389781 master-0 kubenswrapper[33141]: I0308 03:42:18.389453 33141 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-console-config\") on node \"master-0\" DevicePath \"\""
Mar 08 03:42:18.389781 master-0 kubenswrapper[33141]: I0308 03:42:18.389466 33141 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 08 03:42:18.389781 master-0 kubenswrapper[33141]: I0308 03:42:18.389477 33141 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 08 03:42:18.389781 master-0 kubenswrapper[33141]: I0308 03:42:18.389488 33141 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName:
\"kubernetes.io/secret/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 08 03:42:18.587456 master-0 kubenswrapper[33141]: I0308 03:42:18.587347 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-75d8bd58cb-xqq9p_a5e710d0-27ee-4931-b0d6-1fe5e7e8215d/console/0.log"
Mar 08 03:42:18.587456 master-0 kubenswrapper[33141]: I0308 03:42:18.587394 33141 generic.go:334] "Generic (PLEG): container finished" podID="a5e710d0-27ee-4931-b0d6-1fe5e7e8215d" containerID="0780c1ed6798a631014fca857258a7eb9ea778da9cadfebf0db1635c85c06864" exitCode=2
Mar 08 03:42:18.588134 master-0 kubenswrapper[33141]: I0308 03:42:18.588108 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-75d8bd58cb-xqq9p"
Mar 08 03:42:18.588622 master-0 kubenswrapper[33141]: I0308 03:42:18.588518 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75d8bd58cb-xqq9p" event={"ID":"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d","Type":"ContainerDied","Data":"0780c1ed6798a631014fca857258a7eb9ea778da9cadfebf0db1635c85c06864"}
Mar 08 03:42:18.588622 master-0 kubenswrapper[33141]: I0308 03:42:18.588547 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75d8bd58cb-xqq9p" event={"ID":"a5e710d0-27ee-4931-b0d6-1fe5e7e8215d","Type":"ContainerDied","Data":"88580c37013cf3070eb145942280799e16a04eefa15e2a4e1179b4659a67636d"}
Mar 08 03:42:18.588622 master-0 kubenswrapper[33141]: I0308 03:42:18.588571 33141 scope.go:117] "RemoveContainer" containerID="0780c1ed6798a631014fca857258a7eb9ea778da9cadfebf0db1635c85c06864"
Mar 08 03:42:18.605298 master-0 kubenswrapper[33141]: I0308 03:42:18.605242 33141 scope.go:117] "RemoveContainer" containerID="0780c1ed6798a631014fca857258a7eb9ea778da9cadfebf0db1635c85c06864"
Mar 08 03:42:18.605859 master-0 kubenswrapper[33141]: E0308 03:42:18.605820 33141 log.go:32] "ContainerStatus from
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0780c1ed6798a631014fca857258a7eb9ea778da9cadfebf0db1635c85c06864\": container with ID starting with 0780c1ed6798a631014fca857258a7eb9ea778da9cadfebf0db1635c85c06864 not found: ID does not exist" containerID="0780c1ed6798a631014fca857258a7eb9ea778da9cadfebf0db1635c85c06864"
Mar 08 03:42:18.605983 master-0 kubenswrapper[33141]: I0308 03:42:18.605856 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0780c1ed6798a631014fca857258a7eb9ea778da9cadfebf0db1635c85c06864"} err="failed to get container status \"0780c1ed6798a631014fca857258a7eb9ea778da9cadfebf0db1635c85c06864\": rpc error: code = NotFound desc = could not find container \"0780c1ed6798a631014fca857258a7eb9ea778da9cadfebf0db1635c85c06864\": container with ID starting with 0780c1ed6798a631014fca857258a7eb9ea778da9cadfebf0db1635c85c06864 not found: ID does not exist"
Mar 08 03:42:18.632561 master-0 kubenswrapper[33141]: I0308 03:42:18.632492 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-75d8bd58cb-xqq9p"]
Mar 08 03:42:18.639811 master-0 kubenswrapper[33141]: I0308 03:42:18.639733 33141 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-75d8bd58cb-xqq9p"]
Mar 08 03:42:18.693150 master-0 kubenswrapper[33141]: I0308 03:42:18.693087 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-npfvg"]
Mar 08 03:42:18.693472 master-0 kubenswrapper[33141]: E0308 03:42:18.693449 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5e710d0-27ee-4931-b0d6-1fe5e7e8215d" containerName="console"
Mar 08 03:42:18.693472 master-0 kubenswrapper[33141]: I0308 03:42:18.693470 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5e710d0-27ee-4931-b0d6-1fe5e7e8215d" containerName="console"
Mar 08 03:42:18.693681 master-0 kubenswrapper[33141]: I0308
03:42:18.693661 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5e710d0-27ee-4931-b0d6-1fe5e7e8215d" containerName="console"
Mar 08 03:42:18.694203 master-0 kubenswrapper[33141]: I0308 03:42:18.694180 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-npfvg"
Mar 08 03:42:18.698952 master-0 kubenswrapper[33141]: I0308 03:42:18.698884 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Mar 08 03:42:18.699173 master-0 kubenswrapper[33141]: I0308 03:42:18.699143 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Mar 08 03:42:18.712264 master-0 kubenswrapper[33141]: I0308 03:42:18.711886 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-npfvg"]
Mar 08 03:42:18.794047 master-0 kubenswrapper[33141]: I0308 03:42:18.793976 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4ht8\" (UniqueName: \"kubernetes.io/projected/2c5c76bd-0a76-495e-a433-b3686480e238-kube-api-access-b4ht8\") pod \"openstack-operator-index-npfvg\" (UID: \"2c5c76bd-0a76-495e-a433-b3686480e238\") " pod="openstack-operators/openstack-operator-index-npfvg"
Mar 08 03:42:18.897778 master-0 kubenswrapper[33141]: I0308 03:42:18.896765 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4ht8\" (UniqueName: \"kubernetes.io/projected/2c5c76bd-0a76-495e-a433-b3686480e238-kube-api-access-b4ht8\") pod \"openstack-operator-index-npfvg\" (UID: \"2c5c76bd-0a76-495e-a433-b3686480e238\") " pod="openstack-operators/openstack-operator-index-npfvg"
Mar 08 03:42:18.923575 master-0 kubenswrapper[33141]: I0308 03:42:18.923536 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4ht8\"
(UniqueName: \"kubernetes.io/projected/2c5c76bd-0a76-495e-a433-b3686480e238-kube-api-access-b4ht8\") pod \"openstack-operator-index-npfvg\" (UID: \"2c5c76bd-0a76-495e-a433-b3686480e238\") " pod="openstack-operators/openstack-operator-index-npfvg"
Mar 08 03:42:19.016642 master-0 kubenswrapper[33141]: I0308 03:42:19.016585 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-npfvg"
Mar 08 03:42:19.437936 master-0 kubenswrapper[33141]: I0308 03:42:19.437815 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-npfvg"]
Mar 08 03:42:19.597086 master-0 kubenswrapper[33141]: I0308 03:42:19.597010 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-npfvg" event={"ID":"2c5c76bd-0a76-495e-a433-b3686480e238","Type":"ContainerStarted","Data":"9801ead4cf8fa8e07767786697f8906bf8d0715f9cf4309f9d8c9450ded5d4c9"}
Mar 08 03:42:20.365096 master-0 kubenswrapper[33141]: I0308 03:42:20.364997 33141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5e710d0-27ee-4931-b0d6-1fe5e7e8215d" path="/var/lib/kubelet/pods/a5e710d0-27ee-4931-b0d6-1fe5e7e8215d/volumes"
Mar 08 03:42:21.621001 master-0 kubenswrapper[33141]: I0308 03:42:21.620924 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-npfvg" event={"ID":"2c5c76bd-0a76-495e-a433-b3686480e238","Type":"ContainerStarted","Data":"4961e2cf53af944fd5cf4a3b6966e06989b4a2b6bb819171b0ca4d108b070bd9"}
Mar 08 03:42:21.657041 master-0 kubenswrapper[33141]: I0308 03:42:21.656892 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-npfvg" podStartSLOduration=2.038389838 podStartE2EDuration="3.656864278s" podCreationTimestamp="2026-03-08 03:42:18 +0000 UTC" firstStartedPulling="2026-03-08 03:42:19.436948263 +0000 UTC m=+653.306841456"
lastFinishedPulling="2026-03-08 03:42:21.055422703 +0000 UTC m=+654.925315896" observedRunningTime="2026-03-08 03:42:21.64161955 +0000 UTC m=+655.511512773" watchObservedRunningTime="2026-03-08 03:42:21.656864278 +0000 UTC m=+655.526757481"
Mar 08 03:42:22.061984 master-0 kubenswrapper[33141]: I0308 03:42:22.061881 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-npfvg"]
Mar 08 03:42:23.325978 master-0 kubenswrapper[33141]: I0308 03:42:23.310019 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-972kj"]
Mar 08 03:42:23.325978 master-0 kubenswrapper[33141]: I0308 03:42:23.318529 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-972kj"
Mar 08 03:42:23.361373 master-0 kubenswrapper[33141]: I0308 03:42:23.350639 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-972kj"]
Mar 08 03:42:23.380605 master-0 kubenswrapper[33141]: I0308 03:42:23.379606 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcbw6\" (UniqueName: \"kubernetes.io/projected/2d0ddc76-ddd0-4c01-af86-b19a6388f2aa-kube-api-access-jcbw6\") pod \"openstack-operator-index-972kj\" (UID: \"2d0ddc76-ddd0-4c01-af86-b19a6388f2aa\") " pod="openstack-operators/openstack-operator-index-972kj"
Mar 08 03:42:23.481803 master-0 kubenswrapper[33141]: I0308 03:42:23.481713 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcbw6\" (UniqueName: \"kubernetes.io/projected/2d0ddc76-ddd0-4c01-af86-b19a6388f2aa-kube-api-access-jcbw6\") pod \"openstack-operator-index-972kj\" (UID: \"2d0ddc76-ddd0-4c01-af86-b19a6388f2aa\") " pod="openstack-operators/openstack-operator-index-972kj"
Mar 08 03:42:23.500530 master-0 kubenswrapper[33141]: I0308 03:42:23.500474 33141
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcbw6\" (UniqueName: \"kubernetes.io/projected/2d0ddc76-ddd0-4c01-af86-b19a6388f2aa-kube-api-access-jcbw6\") pod \"openstack-operator-index-972kj\" (UID: \"2d0ddc76-ddd0-4c01-af86-b19a6388f2aa\") " pod="openstack-operators/openstack-operator-index-972kj"
Mar 08 03:42:23.638826 master-0 kubenswrapper[33141]: I0308 03:42:23.638641 33141 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-npfvg" podUID="2c5c76bd-0a76-495e-a433-b3686480e238" containerName="registry-server" containerID="cri-o://4961e2cf53af944fd5cf4a3b6966e06989b4a2b6bb819171b0ca4d108b070bd9" gracePeriod=2
Mar 08 03:42:23.660669 master-0 kubenswrapper[33141]: I0308 03:42:23.660615 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-972kj"
Mar 08 03:42:24.109235 master-0 kubenswrapper[33141]: I0308 03:42:24.109108 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-972kj"]
Mar 08 03:42:24.118356 master-0 kubenswrapper[33141]: W0308 03:42:24.118301 33141 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d0ddc76_ddd0_4c01_af86_b19a6388f2aa.slice/crio-eb840c8594d3740efff019d309161b50ad841daa1734977ac8be2e0894fa70ac WatchSource:0}: Error finding container eb840c8594d3740efff019d309161b50ad841daa1734977ac8be2e0894fa70ac: Status 404 returned error can't find the container with id eb840c8594d3740efff019d309161b50ad841daa1734977ac8be2e0894fa70ac
Mar 08 03:42:24.157869 master-0 kubenswrapper[33141]: I0308 03:42:24.157825 33141 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/openstack-operator-index-npfvg"
Mar 08 03:42:24.295142 master-0 kubenswrapper[33141]: I0308 03:42:24.295069 33141 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4ht8\" (UniqueName: \"kubernetes.io/projected/2c5c76bd-0a76-495e-a433-b3686480e238-kube-api-access-b4ht8\") pod \"2c5c76bd-0a76-495e-a433-b3686480e238\" (UID: \"2c5c76bd-0a76-495e-a433-b3686480e238\") "
Mar 08 03:42:24.298297 master-0 kubenswrapper[33141]: I0308 03:42:24.298252 33141 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c5c76bd-0a76-495e-a433-b3686480e238-kube-api-access-b4ht8" (OuterVolumeSpecName: "kube-api-access-b4ht8") pod "2c5c76bd-0a76-495e-a433-b3686480e238" (UID: "2c5c76bd-0a76-495e-a433-b3686480e238"). InnerVolumeSpecName "kube-api-access-b4ht8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 03:42:24.397800 master-0 kubenswrapper[33141]: I0308 03:42:24.397728 33141 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4ht8\" (UniqueName: \"kubernetes.io/projected/2c5c76bd-0a76-495e-a433-b3686480e238-kube-api-access-b4ht8\") on node \"master-0\" DevicePath \"\""
Mar 08 03:42:24.658044 master-0 kubenswrapper[33141]: I0308 03:42:24.657975 33141 generic.go:334] "Generic (PLEG): container finished" podID="2c5c76bd-0a76-495e-a433-b3686480e238" containerID="4961e2cf53af944fd5cf4a3b6966e06989b4a2b6bb819171b0ca4d108b070bd9" exitCode=0
Mar 08 03:42:24.658173 master-0 kubenswrapper[33141]: I0308 03:42:24.658110 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-npfvg" event={"ID":"2c5c76bd-0a76-495e-a433-b3686480e238","Type":"ContainerDied","Data":"4961e2cf53af944fd5cf4a3b6966e06989b4a2b6bb819171b0ca4d108b070bd9"}
Mar 08 03:42:24.658173 master-0 kubenswrapper[33141]: I0308 03:42:24.658144 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack-operators/openstack-operator-index-npfvg" event={"ID":"2c5c76bd-0a76-495e-a433-b3686480e238","Type":"ContainerDied","Data":"9801ead4cf8fa8e07767786697f8906bf8d0715f9cf4309f9d8c9450ded5d4c9"}
Mar 08 03:42:24.658259 master-0 kubenswrapper[33141]: I0308 03:42:24.658191 33141 scope.go:117] "RemoveContainer" containerID="4961e2cf53af944fd5cf4a3b6966e06989b4a2b6bb819171b0ca4d108b070bd9"
Mar 08 03:42:24.658259 master-0 kubenswrapper[33141]: I0308 03:42:24.658197 33141 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-npfvg"
Mar 08 03:42:24.659823 master-0 kubenswrapper[33141]: I0308 03:42:24.659759 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-972kj" event={"ID":"2d0ddc76-ddd0-4c01-af86-b19a6388f2aa","Type":"ContainerStarted","Data":"eb840c8594d3740efff019d309161b50ad841daa1734977ac8be2e0894fa70ac"}
Mar 08 03:42:24.685510 master-0 kubenswrapper[33141]: I0308 03:42:24.685446 33141 scope.go:117] "RemoveContainer" containerID="4961e2cf53af944fd5cf4a3b6966e06989b4a2b6bb819171b0ca4d108b070bd9"
Mar 08 03:42:24.686163 master-0 kubenswrapper[33141]: E0308 03:42:24.686109 33141 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4961e2cf53af944fd5cf4a3b6966e06989b4a2b6bb819171b0ca4d108b070bd9\": container with ID starting with 4961e2cf53af944fd5cf4a3b6966e06989b4a2b6bb819171b0ca4d108b070bd9 not found: ID does not exist" containerID="4961e2cf53af944fd5cf4a3b6966e06989b4a2b6bb819171b0ca4d108b070bd9"
Mar 08 03:42:24.686225 master-0 kubenswrapper[33141]: I0308 03:42:24.686181 33141 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4961e2cf53af944fd5cf4a3b6966e06989b4a2b6bb819171b0ca4d108b070bd9"} err="failed to get container status \"4961e2cf53af944fd5cf4a3b6966e06989b4a2b6bb819171b0ca4d108b070bd9\": rpc error: code =
NotFound desc = could not find container \"4961e2cf53af944fd5cf4a3b6966e06989b4a2b6bb819171b0ca4d108b070bd9\": container with ID starting with 4961e2cf53af944fd5cf4a3b6966e06989b4a2b6bb819171b0ca4d108b070bd9 not found: ID does not exist"
Mar 08 03:42:24.690537 master-0 kubenswrapper[33141]: I0308 03:42:24.690488 33141 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-npfvg"]
Mar 08 03:42:24.710252 master-0 kubenswrapper[33141]: I0308 03:42:24.705594 33141 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-npfvg"]
Mar 08 03:42:25.672894 master-0 kubenswrapper[33141]: I0308 03:42:25.672826 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-972kj" event={"ID":"2d0ddc76-ddd0-4c01-af86-b19a6388f2aa","Type":"ContainerStarted","Data":"7fc0e405f4ec2f6ee0f5f4f9ca2ee128e6d613ce33940909d0d75c3159a2cabc"}
Mar 08 03:42:25.715935 master-0 kubenswrapper[33141]: I0308 03:42:25.714502 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-972kj" podStartSLOduration=3.258717519 podStartE2EDuration="3.714474667s" podCreationTimestamp="2026-03-08 03:42:22 +0000 UTC" firstStartedPulling="2026-03-08 03:42:24.122880257 +0000 UTC m=+657.992773440" lastFinishedPulling="2026-03-08 03:42:24.578637365 +0000 UTC m=+658.448530588" observedRunningTime="2026-03-08 03:42:25.703698316 +0000 UTC m=+659.573591529" watchObservedRunningTime="2026-03-08 03:42:25.714474667 +0000 UTC m=+659.584367910"
Mar 08 03:42:26.372145 master-0 kubenswrapper[33141]: I0308 03:42:26.372064 33141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c5c76bd-0a76-495e-a433-b3686480e238" path="/var/lib/kubelet/pods/2c5c76bd-0a76-495e-a433-b3686480e238/volumes"
Mar 08 03:42:33.661619 master-0 kubenswrapper[33141]: I0308 03:42:33.661525 33141 kubelet.go:2542] "SyncLoop (probe)"
probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-972kj"
Mar 08 03:42:33.661619 master-0 kubenswrapper[33141]: I0308 03:42:33.661576 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-972kj"
Mar 08 03:42:33.704855 master-0 kubenswrapper[33141]: I0308 03:42:33.704759 33141 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-972kj"
Mar 08 03:42:33.825697 master-0 kubenswrapper[33141]: I0308 03:42:33.825644 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-972kj"
Mar 08 03:47:30.289121 master-0 kubenswrapper[33141]: I0308 03:47:30.288974 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-c4v6c/must-gather-tpgmg"]
Mar 08 03:47:30.289775 master-0 kubenswrapper[33141]: E0308 03:47:30.289471 33141 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c5c76bd-0a76-495e-a433-b3686480e238" containerName="registry-server"
Mar 08 03:47:30.289775 master-0 kubenswrapper[33141]: I0308 03:47:30.289496 33141 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c5c76bd-0a76-495e-a433-b3686480e238" containerName="registry-server"
Mar 08 03:47:30.289851 master-0 kubenswrapper[33141]: I0308 03:47:30.289787 33141 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c5c76bd-0a76-495e-a433-b3686480e238" containerName="registry-server"
Mar 08 03:47:30.291095 master-0 kubenswrapper[33141]: I0308 03:47:30.291063 33141 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-must-gather-c4v6c/must-gather-tpgmg"
Mar 08 03:47:30.293853 master-0 kubenswrapper[33141]: I0308 03:47:30.293807 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-c4v6c"/"openshift-service-ca.crt"
Mar 08 03:47:30.294757 master-0 kubenswrapper[33141]: I0308 03:47:30.294718 33141 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-c4v6c"/"kube-root-ca.crt"
Mar 08 03:47:30.314021 master-0 kubenswrapper[33141]: I0308 03:47:30.311182 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-c4v6c/must-gather-sbd6j"]
Mar 08 03:47:30.314021 master-0 kubenswrapper[33141]: I0308 03:47:30.313230 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4v6c/must-gather-sbd6j"
Mar 08 03:47:30.333271 master-0 kubenswrapper[33141]: I0308 03:47:30.333195 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-c4v6c/must-gather-tpgmg"]
Mar 08 03:47:30.344768 master-0 kubenswrapper[33141]: I0308 03:47:30.343983 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-c4v6c/must-gather-sbd6j"]
Mar 08 03:47:30.375565 master-0 kubenswrapper[33141]: I0308 03:47:30.367738 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/411cad81-3cd1-4720-9522-3b1fb8a44c5b-must-gather-output\") pod \"must-gather-sbd6j\" (UID: \"411cad81-3cd1-4720-9522-3b1fb8a44c5b\") " pod="openshift-must-gather-c4v6c/must-gather-sbd6j"
Mar 08 03:47:30.375565 master-0 kubenswrapper[33141]: I0308 03:47:30.367875 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnsr4\" (UniqueName: \"kubernetes.io/projected/411cad81-3cd1-4720-9522-3b1fb8a44c5b-kube-api-access-rnsr4\") pod \"must-gather-sbd6j\" (UID:
\"411cad81-3cd1-4720-9522-3b1fb8a44c5b\") " pod="openshift-must-gather-c4v6c/must-gather-sbd6j"
Mar 08 03:47:30.375565 master-0 kubenswrapper[33141]: I0308 03:47:30.368019 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf6pg\" (UniqueName: \"kubernetes.io/projected/07ed016a-93d7-4930-95a6-706f8f233c5d-kube-api-access-hf6pg\") pod \"must-gather-tpgmg\" (UID: \"07ed016a-93d7-4930-95a6-706f8f233c5d\") " pod="openshift-must-gather-c4v6c/must-gather-tpgmg"
Mar 08 03:47:30.375565 master-0 kubenswrapper[33141]: I0308 03:47:30.368082 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/07ed016a-93d7-4930-95a6-706f8f233c5d-must-gather-output\") pod \"must-gather-tpgmg\" (UID: \"07ed016a-93d7-4930-95a6-706f8f233c5d\") " pod="openshift-must-gather-c4v6c/must-gather-tpgmg"
Mar 08 03:47:30.468944 master-0 kubenswrapper[33141]: I0308 03:47:30.468852 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnsr4\" (UniqueName: \"kubernetes.io/projected/411cad81-3cd1-4720-9522-3b1fb8a44c5b-kube-api-access-rnsr4\") pod \"must-gather-sbd6j\" (UID: \"411cad81-3cd1-4720-9522-3b1fb8a44c5b\") " pod="openshift-must-gather-c4v6c/must-gather-sbd6j"
Mar 08 03:47:30.468944 master-0 kubenswrapper[33141]: I0308 03:47:30.468937 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf6pg\" (UniqueName: \"kubernetes.io/projected/07ed016a-93d7-4930-95a6-706f8f233c5d-kube-api-access-hf6pg\") pod \"must-gather-tpgmg\" (UID: \"07ed016a-93d7-4930-95a6-706f8f233c5d\") " pod="openshift-must-gather-c4v6c/must-gather-tpgmg"
Mar 08 03:47:30.469257 master-0 kubenswrapper[33141]: I0308 03:47:30.468979 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName:
\"kubernetes.io/empty-dir/07ed016a-93d7-4930-95a6-706f8f233c5d-must-gather-output\") pod \"must-gather-tpgmg\" (UID: \"07ed016a-93d7-4930-95a6-706f8f233c5d\") " pod="openshift-must-gather-c4v6c/must-gather-tpgmg" Mar 08 03:47:30.469257 master-0 kubenswrapper[33141]: I0308 03:47:30.469087 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/411cad81-3cd1-4720-9522-3b1fb8a44c5b-must-gather-output\") pod \"must-gather-sbd6j\" (UID: \"411cad81-3cd1-4720-9522-3b1fb8a44c5b\") " pod="openshift-must-gather-c4v6c/must-gather-sbd6j" Mar 08 03:47:30.469604 master-0 kubenswrapper[33141]: I0308 03:47:30.469563 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/07ed016a-93d7-4930-95a6-706f8f233c5d-must-gather-output\") pod \"must-gather-tpgmg\" (UID: \"07ed016a-93d7-4930-95a6-706f8f233c5d\") " pod="openshift-must-gather-c4v6c/must-gather-tpgmg" Mar 08 03:47:30.469604 master-0 kubenswrapper[33141]: I0308 03:47:30.469593 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/411cad81-3cd1-4720-9522-3b1fb8a44c5b-must-gather-output\") pod \"must-gather-sbd6j\" (UID: \"411cad81-3cd1-4720-9522-3b1fb8a44c5b\") " pod="openshift-must-gather-c4v6c/must-gather-sbd6j" Mar 08 03:47:30.483899 master-0 kubenswrapper[33141]: I0308 03:47:30.483845 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf6pg\" (UniqueName: \"kubernetes.io/projected/07ed016a-93d7-4930-95a6-706f8f233c5d-kube-api-access-hf6pg\") pod \"must-gather-tpgmg\" (UID: \"07ed016a-93d7-4930-95a6-706f8f233c5d\") " pod="openshift-must-gather-c4v6c/must-gather-tpgmg" Mar 08 03:47:30.488453 master-0 kubenswrapper[33141]: I0308 03:47:30.488418 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnsr4\" 
(UniqueName: \"kubernetes.io/projected/411cad81-3cd1-4720-9522-3b1fb8a44c5b-kube-api-access-rnsr4\") pod \"must-gather-sbd6j\" (UID: \"411cad81-3cd1-4720-9522-3b1fb8a44c5b\") " pod="openshift-must-gather-c4v6c/must-gather-sbd6j" Mar 08 03:47:30.646190 master-0 kubenswrapper[33141]: I0308 03:47:30.646055 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4v6c/must-gather-tpgmg" Mar 08 03:47:30.669521 master-0 kubenswrapper[33141]: I0308 03:47:30.669456 33141 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4v6c/must-gather-sbd6j" Mar 08 03:47:31.106357 master-0 kubenswrapper[33141]: I0308 03:47:31.106277 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-c4v6c/must-gather-sbd6j"] Mar 08 03:47:31.113081 master-0 kubenswrapper[33141]: I0308 03:47:31.113001 33141 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 03:47:31.179187 master-0 kubenswrapper[33141]: I0308 03:47:31.179122 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-c4v6c/must-gather-tpgmg"] Mar 08 03:47:31.254706 master-0 kubenswrapper[33141]: I0308 03:47:31.254628 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4v6c/must-gather-tpgmg" event={"ID":"07ed016a-93d7-4930-95a6-706f8f233c5d","Type":"ContainerStarted","Data":"d6b0416367314835849d8af1ac318a7f614bc4c26f29af69362394d64727c9d0"} Mar 08 03:47:31.255695 master-0 kubenswrapper[33141]: I0308 03:47:31.255650 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4v6c/must-gather-sbd6j" event={"ID":"411cad81-3cd1-4720-9522-3b1fb8a44c5b","Type":"ContainerStarted","Data":"ff50060970d81de7f4ea00d47e142037e0a936e4b4723c041cf2466ed37c31ef"} Mar 08 03:47:33.104399 master-0 kubenswrapper[33141]: I0308 03:47:33.104352 33141 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-version_cluster-version-operator-8c9c967c7-mhw86_d2a53f3b-7e22-47eb-9f28-da3441b3662f/cluster-version-operator/0.log" Mar 08 03:47:33.279656 master-0 kubenswrapper[33141]: I0308 03:47:33.279590 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4v6c/must-gather-tpgmg" event={"ID":"07ed016a-93d7-4930-95a6-706f8f233c5d","Type":"ContainerStarted","Data":"016c1b1c2b479a8648a85b167ce08870945e686589657421777288ee5bd68ea2"} Mar 08 03:47:33.279656 master-0 kubenswrapper[33141]: I0308 03:47:33.279646 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4v6c/must-gather-tpgmg" event={"ID":"07ed016a-93d7-4930-95a6-706f8f233c5d","Type":"ContainerStarted","Data":"1d9200ebc3e5e3bb8141e72f5c618c9af6d817eb2a465301b6de252a125f924c"} Mar 08 03:47:33.303220 master-0 kubenswrapper[33141]: I0308 03:47:33.300430 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-c4v6c/must-gather-tpgmg" podStartSLOduration=2.104294874 podStartE2EDuration="3.300403308s" podCreationTimestamp="2026-03-08 03:47:30 +0000 UTC" firstStartedPulling="2026-03-08 03:47:31.183500783 +0000 UTC m=+965.053394016" lastFinishedPulling="2026-03-08 03:47:32.379609257 +0000 UTC m=+966.249502450" observedRunningTime="2026-03-08 03:47:33.300166202 +0000 UTC m=+967.170059395" watchObservedRunningTime="2026-03-08 03:47:33.300403308 +0000 UTC m=+967.170296501" Mar 08 03:47:35.669456 master-0 kubenswrapper[33141]: I0308 03:47:35.669399 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-8c9c967c7-mhw86_d2a53f3b-7e22-47eb-9f28-da3441b3662f/cluster-version-operator/1.log" Mar 08 03:47:37.439502 master-0 kubenswrapper[33141]: I0308 03:47:37.439382 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-v6lcp_78826ab3-1b89-4efe-9986-38e67fc8b8f1/controller/0.log" Mar 08 
03:47:37.450222 master-0 kubenswrapper[33141]: I0308 03:47:37.450167 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-v6lcp_78826ab3-1b89-4efe-9986-38e67fc8b8f1/kube-rbac-proxy/0.log" Mar 08 03:47:37.590370 master-0 kubenswrapper[33141]: I0308 03:47:37.590331 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/controller/0.log" Mar 08 03:47:37.637491 master-0 kubenswrapper[33141]: I0308 03:47:37.637439 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/frr/0.log" Mar 08 03:47:37.651295 master-0 kubenswrapper[33141]: I0308 03:47:37.651250 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/reloader/0.log" Mar 08 03:47:37.668118 master-0 kubenswrapper[33141]: I0308 03:47:37.667964 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/frr-metrics/0.log" Mar 08 03:47:37.682230 master-0 kubenswrapper[33141]: I0308 03:47:37.682123 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-c7l6p_c84683bd-71a1-47cf-a335-0954d7e82171/nmstate-console-plugin/0.log" Mar 08 03:47:37.691071 master-0 kubenswrapper[33141]: I0308 03:47:37.688149 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/kube-rbac-proxy/0.log" Mar 08 03:47:37.706993 master-0 kubenswrapper[33141]: I0308 03:47:37.705784 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/kube-rbac-proxy-frr/0.log" Mar 08 03:47:37.706993 master-0 kubenswrapper[33141]: I0308 03:47:37.706196 33141 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-handler-9tlm8_fe851503-1189-44d9-aaf7-2eb9b9b886a1/nmstate-handler/0.log" Mar 08 03:47:37.721197 master-0 kubenswrapper[33141]: I0308 03:47:37.717643 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-b6x7j_56ce4272-f506-4729-a411-d59d530ed5ea/nmstate-metrics/0.log" Mar 08 03:47:37.721197 master-0 kubenswrapper[33141]: I0308 03:47:37.717789 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/cp-frr-files/0.log" Mar 08 03:47:37.733056 master-0 kubenswrapper[33141]: I0308 03:47:37.732357 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/cp-reloader/0.log" Mar 08 03:47:37.733451 master-0 kubenswrapper[33141]: I0308 03:47:37.733390 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-b6x7j_56ce4272-f506-4729-a411-d59d530ed5ea/kube-rbac-proxy/0.log" Mar 08 03:47:37.748199 master-0 kubenswrapper[33141]: I0308 03:47:37.747686 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/cp-metrics/0.log" Mar 08 03:47:37.750825 master-0 kubenswrapper[33141]: I0308 03:47:37.750443 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-4rskc_fd3b4005-3ca5-4d51-b08e-0a71545c2990/nmstate-operator/0.log" Mar 08 03:47:37.759483 master-0 kubenswrapper[33141]: I0308 03:47:37.759106 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-hjjnv_30678329-c9f2-4958-9b2d-6bacd9250bbe/frr-k8s-webhook-server/0.log" Mar 08 03:47:37.777361 master-0 kubenswrapper[33141]: I0308 03:47:37.777321 33141 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-c9mns_49c2416a-c985-49a6-b624-134998684fe6/nmstate-webhook/0.log" Mar 08 03:47:37.863115 master-0 kubenswrapper[33141]: I0308 03:47:37.858183 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-76b695cc4b-p6jt4_510c4395-781d-48ea-b253-247bc7bcc3f4/manager/0.log" Mar 08 03:47:37.885912 master-0 kubenswrapper[33141]: I0308 03:47:37.885860 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-58cf648889-6c6hf_60613c6d-80bd-4b7c-9560-69b983dd71df/webhook-server/0.log" Mar 08 03:47:37.975215 master-0 kubenswrapper[33141]: I0308 03:47:37.975182 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jhqp7_b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d/speaker/0.log" Mar 08 03:47:37.984493 master-0 kubenswrapper[33141]: I0308 03:47:37.983822 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jhqp7_b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d/kube-rbac-proxy/0.log" Mar 08 03:47:40.023164 master-0 kubenswrapper[33141]: I0308 03:47:40.023129 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcdctl/0.log" Mar 08 03:47:40.078305 master-0 kubenswrapper[33141]: I0308 03:47:40.078177 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd/0.log" Mar 08 03:47:40.095018 master-0 kubenswrapper[33141]: I0308 03:47:40.094104 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-metrics/0.log" Mar 08 03:47:40.108008 master-0 kubenswrapper[33141]: I0308 03:47:40.107193 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-readyz/0.log" Mar 08 
03:47:40.124408 master-0 kubenswrapper[33141]: I0308 03:47:40.124374 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-rev/0.log" Mar 08 03:47:40.150622 master-0 kubenswrapper[33141]: I0308 03:47:40.149853 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/setup/0.log" Mar 08 03:47:40.170090 master-0 kubenswrapper[33141]: I0308 03:47:40.170041 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-ensure-env-vars/0.log" Mar 08 03:47:40.185720 master-0 kubenswrapper[33141]: I0308 03:47:40.185677 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-resources-copy/0.log" Mar 08 03:47:40.229018 master-0 kubenswrapper[33141]: I0308 03:47:40.228543 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_ed2e0194-6b50-4478-aba4-21193d2c18aa/installer/0.log" Mar 08 03:47:40.277330 master-0 kubenswrapper[33141]: I0308 03:47:40.277067 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_3c20b192-755d-46cd-ab12-2e823b92222e/installer/0.log" Mar 08 03:47:40.494182 master-0 kubenswrapper[33141]: I0308 03:47:40.494119 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-695cdc494-nz9mf_5dccd938-f89c-48f9-aa32-761b3dead193/oauth-openshift/0.log" Mar 08 03:47:41.442131 master-0 kubenswrapper[33141]: I0308 03:47:41.442085 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-rtvl6_f5e953eb-2d1d-4d67-969b-bdecc69b61f0/assisted-installer-controller/0.log" Mar 08 03:47:41.575878 master-0 kubenswrapper[33141]: I0308 03:47:41.575826 33141 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-k8xgg_90ef7c0a-7c6f-45aa-865d-1e247110b265/authentication-operator/3.log" Mar 08 03:47:41.602263 master-0 kubenswrapper[33141]: I0308 03:47:41.602219 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-k8xgg_90ef7c0a-7c6f-45aa-865d-1e247110b265/authentication-operator/4.log" Mar 08 03:47:42.325181 master-0 kubenswrapper[33141]: I0308 03:47:42.322360 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-79f8cd6fdd-tkxj9_e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d/router/2.log" Mar 08 03:47:42.325396 master-0 kubenswrapper[33141]: I0308 03:47:42.325188 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-79f8cd6fdd-tkxj9_e878dbfe-0ef8-4ee1-a8b9-3bea56ec449d/router/1.log" Mar 08 03:47:42.406271 master-0 kubenswrapper[33141]: I0308 03:47:42.406145 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4v6c/must-gather-sbd6j" event={"ID":"411cad81-3cd1-4720-9522-3b1fb8a44c5b","Type":"ContainerStarted","Data":"9d500fd115768aad9a44a61e5a32cb8247b7ee53081650567bf1beaf651dfc05"} Mar 08 03:47:42.879884 master-0 kubenswrapper[33141]: I0308 03:47:42.879815 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-7b545788fb-82rjl_3a2a141d-a4c3-4b6c-a90b-d184f61a14dd/oauth-apiserver/0.log" Mar 08 03:47:42.891981 master-0 kubenswrapper[33141]: I0308 03:47:42.891005 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-7b545788fb-82rjl_3a2a141d-a4c3-4b6c-a90b-d184f61a14dd/fix-audit-permissions/0.log" Mar 08 03:47:43.384576 master-0 kubenswrapper[33141]: I0308 03:47:43.384522 33141 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-jd7rl_2ffe00fd-6834-4a5b-8b0b-b467d284f23c/kube-rbac-proxy/0.log" Mar 08 03:47:43.410786 master-0 kubenswrapper[33141]: I0308 03:47:43.410742 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-jd7rl_2ffe00fd-6834-4a5b-8b0b-b467d284f23c/cluster-autoscaler-operator/0.log" Mar 08 03:47:43.415312 master-0 kubenswrapper[33141]: I0308 03:47:43.415256 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4v6c/must-gather-sbd6j" event={"ID":"411cad81-3cd1-4720-9522-3b1fb8a44c5b","Type":"ContainerStarted","Data":"153627e04cd31b2c9e751dce9586081b2d95d075c964e945b9881835ad82ed99"} Mar 08 03:47:43.415986 master-0 kubenswrapper[33141]: I0308 03:47:43.415850 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-jd7rl_2ffe00fd-6834-4a5b-8b0b-b467d284f23c/cluster-autoscaler-operator/1.log" Mar 08 03:47:43.437070 master-0 kubenswrapper[33141]: I0308 03:47:43.437015 33141 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8"] Mar 08 03:47:43.438135 master-0 kubenswrapper[33141]: I0308 03:47:43.438111 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8" Mar 08 03:47:43.447960 master-0 kubenswrapper[33141]: I0308 03:47:43.447892 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8"] Mar 08 03:47:43.462436 master-0 kubenswrapper[33141]: I0308 03:47:43.462372 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-c4v6c/must-gather-sbd6j" podStartSLOduration=2.450037709 podStartE2EDuration="13.46235376s" podCreationTimestamp="2026-03-08 03:47:30 +0000 UTC" firstStartedPulling="2026-03-08 03:47:31.112937428 +0000 UTC m=+964.982830661" lastFinishedPulling="2026-03-08 03:47:42.125253519 +0000 UTC m=+975.995146712" observedRunningTime="2026-03-08 03:47:43.456404587 +0000 UTC m=+977.326297780" watchObservedRunningTime="2026-03-08 03:47:43.46235376 +0000 UTC m=+977.332246953" Mar 08 03:47:43.463869 master-0 kubenswrapper[33141]: I0308 03:47:43.463496 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-qgg4b_45212ce7-5f95-402e-93c4-83bac844f77d/cluster-baremetal-operator/1.log" Mar 08 03:47:43.465678 master-0 kubenswrapper[33141]: I0308 03:47:43.464241 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-qgg4b_45212ce7-5f95-402e-93c4-83bac844f77d/cluster-baremetal-operator/2.log" Mar 08 03:47:43.484295 master-0 kubenswrapper[33141]: I0308 03:47:43.484243 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-qgg4b_45212ce7-5f95-402e-93c4-83bac844f77d/baremetal-kube-rbac-proxy/0.log" Mar 08 03:47:43.498795 master-0 kubenswrapper[33141]: I0308 03:47:43.498757 33141 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-zljww_c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6/control-plane-machine-set-operator/0.log" Mar 08 03:47:43.499180 master-0 kubenswrapper[33141]: I0308 03:47:43.499033 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-zljww_c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6/control-plane-machine-set-operator/1.log" Mar 08 03:47:43.513474 master-0 kubenswrapper[33141]: I0308 03:47:43.513231 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-5l4t7_8c65557b-9566-49f1-a049-fe492ca201b5/kube-rbac-proxy/0.log" Mar 08 03:47:43.527234 master-0 kubenswrapper[33141]: I0308 03:47:43.527185 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-5l4t7_8c65557b-9566-49f1-a049-fe492ca201b5/machine-api-operator/0.log" Mar 08 03:47:43.528752 master-0 kubenswrapper[33141]: I0308 03:47:43.528690 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-5l4t7_8c65557b-9566-49f1-a049-fe492ca201b5/machine-api-operator/1.log" Mar 08 03:47:43.565900 master-0 kubenswrapper[33141]: I0308 03:47:43.565845 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8m6r\" (UniqueName: \"kubernetes.io/projected/7ec11d45-bb14-42da-aac9-215b780a96f9-kube-api-access-z8m6r\") pod \"perf-node-gather-daemonset-5pxl8\" (UID: \"7ec11d45-bb14-42da-aac9-215b780a96f9\") " pod="openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8" Mar 08 03:47:43.566116 master-0 kubenswrapper[33141]: I0308 03:47:43.565946 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7ec11d45-bb14-42da-aac9-215b780a96f9-sys\") pod 
\"perf-node-gather-daemonset-5pxl8\" (UID: \"7ec11d45-bb14-42da-aac9-215b780a96f9\") " pod="openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8" Mar 08 03:47:43.566116 master-0 kubenswrapper[33141]: I0308 03:47:43.566016 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ec11d45-bb14-42da-aac9-215b780a96f9-lib-modules\") pod \"perf-node-gather-daemonset-5pxl8\" (UID: \"7ec11d45-bb14-42da-aac9-215b780a96f9\") " pod="openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8" Mar 08 03:47:43.566182 master-0 kubenswrapper[33141]: I0308 03:47:43.566133 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/7ec11d45-bb14-42da-aac9-215b780a96f9-proc\") pod \"perf-node-gather-daemonset-5pxl8\" (UID: \"7ec11d45-bb14-42da-aac9-215b780a96f9\") " pod="openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8" Mar 08 03:47:43.566220 master-0 kubenswrapper[33141]: I0308 03:47:43.566201 33141 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/7ec11d45-bb14-42da-aac9-215b780a96f9-podres\") pod \"perf-node-gather-daemonset-5pxl8\" (UID: \"7ec11d45-bb14-42da-aac9-215b780a96f9\") " pod="openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8" Mar 08 03:47:43.668712 master-0 kubenswrapper[33141]: I0308 03:47:43.668592 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8m6r\" (UniqueName: \"kubernetes.io/projected/7ec11d45-bb14-42da-aac9-215b780a96f9-kube-api-access-z8m6r\") pod \"perf-node-gather-daemonset-5pxl8\" (UID: \"7ec11d45-bb14-42da-aac9-215b780a96f9\") " pod="openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8" Mar 08 03:47:43.668865 master-0 kubenswrapper[33141]: I0308 03:47:43.668828 33141 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7ec11d45-bb14-42da-aac9-215b780a96f9-sys\") pod \"perf-node-gather-daemonset-5pxl8\" (UID: \"7ec11d45-bb14-42da-aac9-215b780a96f9\") " pod="openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8" Mar 08 03:47:43.668971 master-0 kubenswrapper[33141]: I0308 03:47:43.668950 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ec11d45-bb14-42da-aac9-215b780a96f9-lib-modules\") pod \"perf-node-gather-daemonset-5pxl8\" (UID: \"7ec11d45-bb14-42da-aac9-215b780a96f9\") " pod="openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8" Mar 08 03:47:43.669031 master-0 kubenswrapper[33141]: I0308 03:47:43.669010 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/7ec11d45-bb14-42da-aac9-215b780a96f9-proc\") pod \"perf-node-gather-daemonset-5pxl8\" (UID: \"7ec11d45-bb14-42da-aac9-215b780a96f9\") " pod="openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8" Mar 08 03:47:43.669067 master-0 kubenswrapper[33141]: I0308 03:47:43.669046 33141 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/7ec11d45-bb14-42da-aac9-215b780a96f9-podres\") pod \"perf-node-gather-daemonset-5pxl8\" (UID: \"7ec11d45-bb14-42da-aac9-215b780a96f9\") " pod="openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8" Mar 08 03:47:43.669099 master-0 kubenswrapper[33141]: I0308 03:47:43.669054 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7ec11d45-bb14-42da-aac9-215b780a96f9-sys\") pod \"perf-node-gather-daemonset-5pxl8\" (UID: \"7ec11d45-bb14-42da-aac9-215b780a96f9\") " pod="openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8" Mar 08 03:47:43.669132 
master-0 kubenswrapper[33141]: I0308 03:47:43.669122 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ec11d45-bb14-42da-aac9-215b780a96f9-lib-modules\") pod \"perf-node-gather-daemonset-5pxl8\" (UID: \"7ec11d45-bb14-42da-aac9-215b780a96f9\") " pod="openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8" Mar 08 03:47:43.669165 master-0 kubenswrapper[33141]: I0308 03:47:43.669156 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/7ec11d45-bb14-42da-aac9-215b780a96f9-proc\") pod \"perf-node-gather-daemonset-5pxl8\" (UID: \"7ec11d45-bb14-42da-aac9-215b780a96f9\") " pod="openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8" Mar 08 03:47:43.669220 master-0 kubenswrapper[33141]: I0308 03:47:43.669197 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/7ec11d45-bb14-42da-aac9-215b780a96f9-podres\") pod \"perf-node-gather-daemonset-5pxl8\" (UID: \"7ec11d45-bb14-42da-aac9-215b780a96f9\") " pod="openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8" Mar 08 03:47:43.685955 master-0 kubenswrapper[33141]: I0308 03:47:43.685784 33141 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8m6r\" (UniqueName: \"kubernetes.io/projected/7ec11d45-bb14-42da-aac9-215b780a96f9-kube-api-access-z8m6r\") pod \"perf-node-gather-daemonset-5pxl8\" (UID: \"7ec11d45-bb14-42da-aac9-215b780a96f9\") " pod="openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8" Mar 08 03:47:43.765990 master-0 kubenswrapper[33141]: I0308 03:47:43.765932 33141 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8" Mar 08 03:47:44.283957 master-0 kubenswrapper[33141]: I0308 03:47:44.283406 33141 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8"] Mar 08 03:47:44.431527 master-0 kubenswrapper[33141]: I0308 03:47:44.431478 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8" event={"ID":"7ec11d45-bb14-42da-aac9-215b780a96f9","Type":"ContainerStarted","Data":"116f182ee6a07437e54645abac7b68bd05a9a13d6e0f0fd200452c8b9e311423"} Mar 08 03:47:44.528126 master-0 kubenswrapper[33141]: I0308 03:47:44.528068 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc_e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff/cluster-cloud-controller-manager/0.log" Mar 08 03:47:44.542725 master-0 kubenswrapper[33141]: I0308 03:47:44.542619 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc_e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff/config-sync-controllers/0.log" Mar 08 03:47:44.555406 master-0 kubenswrapper[33141]: I0308 03:47:44.555352 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-qf4bc_e6b0b3cc-969f-495b-bf1f-bdf1f4f086ff/kube-rbac-proxy/0.log" Mar 08 03:47:45.439401 master-0 kubenswrapper[33141]: I0308 03:47:45.439344 33141 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8" event={"ID":"7ec11d45-bb14-42da-aac9-215b780a96f9","Type":"ContainerStarted","Data":"5eb300636b3241d458f8334011609b48b949308d95aad8ee0bb67619b1922ef2"} Mar 08 03:47:45.439945 master-0 kubenswrapper[33141]: I0308 
03:47:45.439478 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8"
Mar 08 03:47:45.496749 master-0 kubenswrapper[33141]: I0308 03:47:45.496651 33141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8" podStartSLOduration=2.49662664 podStartE2EDuration="2.49662664s" podCreationTimestamp="2026-03-08 03:47:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 03:47:45.489012534 +0000 UTC m=+979.358905747" watchObservedRunningTime="2026-03-08 03:47:45.49662664 +0000 UTC m=+979.366519833"
Mar 08 03:47:45.870519 master-0 kubenswrapper[33141]: I0308 03:47:45.870469 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-55d85b7b47-9hjss_38287d1a-b784-4ce9-9650-949d92469519/kube-rbac-proxy/0.log"
Mar 08 03:47:45.903297 master-0 kubenswrapper[33141]: I0308 03:47:45.903221 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-55d85b7b47-9hjss_38287d1a-b784-4ce9-9650-949d92469519/cloud-credential-operator/0.log"
Mar 08 03:47:47.157565 master-0 kubenswrapper[33141]: I0308 03:47:47.157427 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-d4wnv_bd1bcaff-7dbd-4559-92fc-5453993f643e/openshift-config-operator/4.log"
Mar 08 03:47:47.158448 master-0 kubenswrapper[33141]: I0308 03:47:47.158406 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-d4wnv_bd1bcaff-7dbd-4559-92fc-5453993f643e/openshift-config-operator/5.log"
Mar 08 03:47:47.169398 master-0 kubenswrapper[33141]: I0308 03:47:47.169353 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-d4wnv_bd1bcaff-7dbd-4559-92fc-5453993f643e/openshift-api/0.log"
Mar 08 03:47:47.783174 master-0 kubenswrapper[33141]: I0308 03:47:47.783120 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-6c7fb6b958-2cw9v_456484f6-a19b-49f9-863b-f76e6f0c8c8f/console-operator/0.log"
Mar 08 03:47:48.269291 master-0 kubenswrapper[33141]: I0308 03:47:48.269236 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-949d7c748-h96bz_7cfd0d69-d30b-4a5b-9c3d-b2c987352fc7/console/0.log"
Mar 08 03:47:48.296041 master-0 kubenswrapper[33141]: I0308 03:47:48.295980 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-84f57b9877-mnlxs_ffa263f5-3916-48bc-80f1-3f5aad28c9f9/download-server/0.log"
Mar 08 03:47:48.399133 master-0 kubenswrapper[33141]: I0308 03:47:48.399079 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-v6lcp_78826ab3-1b89-4efe-9986-38e67fc8b8f1/controller/0.log"
Mar 08 03:47:48.404660 master-0 kubenswrapper[33141]: I0308 03:47:48.404616 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-v6lcp_78826ab3-1b89-4efe-9986-38e67fc8b8f1/kube-rbac-proxy/0.log"
Mar 08 03:47:48.423186 master-0 kubenswrapper[33141]: I0308 03:47:48.423140 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/controller/0.log"
Mar 08 03:47:48.470228 master-0 kubenswrapper[33141]: I0308 03:47:48.470168 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/frr/0.log"
Mar 08 03:47:48.479335 master-0 kubenswrapper[33141]: I0308 03:47:48.479295 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/reloader/0.log"
Mar 08 03:47:48.484155 master-0 kubenswrapper[33141]: I0308 03:47:48.484125 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/frr-metrics/0.log"
Mar 08 03:47:48.506604 master-0 kubenswrapper[33141]: I0308 03:47:48.506314 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/kube-rbac-proxy/0.log"
Mar 08 03:47:48.513399 master-0 kubenswrapper[33141]: I0308 03:47:48.513366 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/kube-rbac-proxy-frr/0.log"
Mar 08 03:47:48.521334 master-0 kubenswrapper[33141]: I0308 03:47:48.521251 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/cp-frr-files/0.log"
Mar 08 03:47:48.526364 master-0 kubenswrapper[33141]: I0308 03:47:48.526333 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/cp-reloader/0.log"
Mar 08 03:47:48.531831 master-0 kubenswrapper[33141]: I0308 03:47:48.531795 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/cp-metrics/0.log"
Mar 08 03:47:48.542969 master-0 kubenswrapper[33141]: I0308 03:47:48.542928 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-hjjnv_30678329-c9f2-4958-9b2d-6bacd9250bbe/frr-k8s-webhook-server/0.log"
Mar 08 03:47:48.567052 master-0 kubenswrapper[33141]: I0308 03:47:48.567006 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-76b695cc4b-p6jt4_510c4395-781d-48ea-b253-247bc7bcc3f4/manager/0.log"
Mar 08 03:47:48.582234 master-0 kubenswrapper[33141]: I0308 03:47:48.582185 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-58cf648889-6c6hf_60613c6d-80bd-4b7c-9560-69b983dd71df/webhook-server/0.log"
Mar 08 03:47:48.650719 master-0 kubenswrapper[33141]: I0308 03:47:48.649851 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jhqp7_b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d/speaker/0.log"
Mar 08 03:47:48.655707 master-0 kubenswrapper[33141]: I0308 03:47:48.655663 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jhqp7_b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d/kube-rbac-proxy/0.log"
Mar 08 03:47:48.915343 master-0 kubenswrapper[33141]: I0308 03:47:48.915222 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-vw4v4_965f8eef-c5af-499b-b1db-cf63072781cc/cluster-storage-operator/0.log"
Mar 08 03:47:48.915770 master-0 kubenswrapper[33141]: I0308 03:47:48.915740 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-vw4v4_965f8eef-c5af-499b-b1db-cf63072781cc/cluster-storage-operator/1.log"
Mar 08 03:47:48.930990 master-0 kubenswrapper[33141]: I0308 03:47:48.930934 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/5.log"
Mar 08 03:47:48.931452 master-0 kubenswrapper[33141]: I0308 03:47:48.931411 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-kfmd9_9fb588a9-6240-4513-8e4b-248eb43d3f06/snapshot-controller/6.log"
Mar 08 03:47:48.950156 master-0 kubenswrapper[33141]: I0308 03:47:48.950107 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5685fbc7d-xbrdp_3d69f101-60a8-41fd-bcda-4eb654c626a2/csi-snapshot-controller-operator/2.log"
Mar 08 03:47:48.950972 master-0 kubenswrapper[33141]: I0308 03:47:48.950892 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5685fbc7d-xbrdp_3d69f101-60a8-41fd-bcda-4eb654c626a2/csi-snapshot-controller-operator/3.log"
Mar 08 03:47:49.544405 master-0 kubenswrapper[33141]: I0308 03:47:49.544342 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-589895fbb7-9mhwc_ef16d7ae-66aa-45d4-b1a6-1327738a46bb/dns-operator/0.log"
Mar 08 03:47:49.557882 master-0 kubenswrapper[33141]: I0308 03:47:49.557822 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-589895fbb7-9mhwc_ef16d7ae-66aa-45d4-b1a6-1327738a46bb/kube-rbac-proxy/0.log"
Mar 08 03:47:49.983184 master-0 kubenswrapper[33141]: I0308 03:47:49.983077 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-p6kjc_9b090750-b893-42fe-8def-dfb3f4253d43/dns/0.log"
Mar 08 03:47:50.001092 master-0 kubenswrapper[33141]: I0308 03:47:50.001037 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-p6kjc_9b090750-b893-42fe-8def-dfb3f4253d43/kube-rbac-proxy/0.log"
Mar 08 03:47:50.024515 master-0 kubenswrapper[33141]: I0308 03:47:50.024459 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-mps4n_f520fbf8-9403-46bc-9381-226a3a1ed1c7/dns-node-resolver/0.log"
Mar 08 03:47:50.302119 master-0 kubenswrapper[33141]: I0308 03:47:50.302074 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-972kj_2d0ddc76-ddd0-4c01-af86-b19a6388f2aa/registry-server/0.log"
Mar 08 03:47:50.570988 master-0 kubenswrapper[33141]: I0308 03:47:50.568608 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-dn4ll_c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/etcd-operator/5.log"
Mar 08 03:47:50.571467 master-0 kubenswrapper[33141]: I0308 03:47:50.571385 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-dn4ll_c6e4afd0-fbcd-49c7-9132-b54c9c28b74b/etcd-operator/4.log"
Mar 08 03:47:51.048185 master-0 kubenswrapper[33141]: I0308 03:47:51.048149 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcdctl/0.log"
Mar 08 03:47:51.105707 master-0 kubenswrapper[33141]: I0308 03:47:51.105650 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd/0.log"
Mar 08 03:47:51.117298 master-0 kubenswrapper[33141]: I0308 03:47:51.117227 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-metrics/0.log"
Mar 08 03:47:51.127778 master-0 kubenswrapper[33141]: I0308 03:47:51.127742 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-readyz/0.log"
Mar 08 03:47:51.139511 master-0 kubenswrapper[33141]: I0308 03:47:51.139466 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-rev/0.log"
Mar 08 03:47:51.153340 master-0 kubenswrapper[33141]: I0308 03:47:51.153281 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/setup/0.log"
Mar 08 03:47:51.167078 master-0 kubenswrapper[33141]: I0308 03:47:51.167037 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-ensure-env-vars/0.log"
Mar 08 03:47:51.181989 master-0 kubenswrapper[33141]: I0308 03:47:51.181946 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-resources-copy/0.log"
Mar 08 03:47:51.228103 master-0 kubenswrapper[33141]: I0308 03:47:51.228048 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_ed2e0194-6b50-4478-aba4-21193d2c18aa/installer/0.log"
Mar 08 03:47:51.265745 master-0 kubenswrapper[33141]: I0308 03:47:51.265679 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_3c20b192-755d-46cd-ab12-2e823b92222e/installer/0.log"
Mar 08 03:47:51.864937 master-0 kubenswrapper[33141]: I0308 03:47:51.864861 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-86d6d77c7c-brfnq_d82cf0db-0891-482d-856b-1675843042dd/cluster-image-registry-operator/0.log"
Mar 08 03:47:51.885789 master-0 kubenswrapper[33141]: I0308 03:47:51.885733 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-86d6d77c7c-brfnq_d82cf0db-0891-482d-856b-1675843042dd/cluster-image-registry-operator/1.log"
Mar 08 03:47:51.898690 master-0 kubenswrapper[33141]: I0308 03:47:51.898655 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-ztkll_8167c401-b19d-4215-9022-d299696fcb2f/node-ca/0.log"
Mar 08 03:47:52.338074 master-0 kubenswrapper[33141]: I0308 03:47:52.338024 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4bpl8_197afe92-5912-4e90-a477-e3abe001bbc7/ingress-operator/4.log"
Mar 08 03:47:52.351810 master-0 kubenswrapper[33141]: I0308 03:47:52.351733 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4bpl8_197afe92-5912-4e90-a477-e3abe001bbc7/ingress-operator/5.log"
Mar 08 03:47:52.365258 master-0 kubenswrapper[33141]: I0308 03:47:52.365222 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-4bpl8_197afe92-5912-4e90-a477-e3abe001bbc7/kube-rbac-proxy/0.log"
Mar 08 03:47:52.852463 master-0 kubenswrapper[33141]: I0308 03:47:52.852419 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-fhncs_6176b631-3911-41cd-beb6-5bc2e924c3a7/serve-healthcheck-canary/0.log"
Mar 08 03:47:53.332819 master-0 kubenswrapper[33141]: I0308 03:47:53.332749 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-8f89dfddd-9l8dc_2728b91e-d59a-4e85-b245-0f297e9377f9/insights-operator/0.log"
Mar 08 03:47:53.336526 master-0 kubenswrapper[33141]: I0308 03:47:53.336474 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-8f89dfddd-9l8dc_2728b91e-d59a-4e85-b245-0f297e9377f9/insights-operator/1.log"
Mar 08 03:47:53.792061 master-0 kubenswrapper[33141]: I0308 03:47:53.792007 33141 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-c4v6c/perf-node-gather-daemonset-5pxl8"
Mar 08 03:47:54.623274 master-0 kubenswrapper[33141]: I0308 03:47:54.623217 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_cbd6f132-3aa4-4114-9a59-e69aafa4cd1d/alertmanager/0.log"
Mar 08 03:47:54.636877 master-0 kubenswrapper[33141]: I0308 03:47:54.636816 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_cbd6f132-3aa4-4114-9a59-e69aafa4cd1d/config-reloader/0.log"
Mar 08 03:47:54.649017 master-0 kubenswrapper[33141]: I0308 03:47:54.648966 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_cbd6f132-3aa4-4114-9a59-e69aafa4cd1d/kube-rbac-proxy-web/0.log"
Mar 08 03:47:54.661894 master-0 kubenswrapper[33141]: I0308 03:47:54.661842 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_cbd6f132-3aa4-4114-9a59-e69aafa4cd1d/kube-rbac-proxy/0.log"
Mar 08 03:47:54.679866 master-0 kubenswrapper[33141]: I0308 03:47:54.679812 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_cbd6f132-3aa4-4114-9a59-e69aafa4cd1d/kube-rbac-proxy-metric/0.log"
Mar 08 03:47:54.691194 master-0 kubenswrapper[33141]: I0308 03:47:54.691142 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_cbd6f132-3aa4-4114-9a59-e69aafa4cd1d/prom-label-proxy/0.log"
Mar 08 03:47:54.702057 master-0 kubenswrapper[33141]: I0308 03:47:54.702001 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_cbd6f132-3aa4-4114-9a59-e69aafa4cd1d/init-config-reloader/0.log"
Mar 08 03:47:54.744539 master-0 kubenswrapper[33141]: I0308 03:47:54.744479 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-674cbfbd9d-hzlxx_ed56c17f-7e15-4776-80a6-3ef091307e89/cluster-monitoring-operator/0.log"
Mar 08 03:47:54.762599 master-0 kubenswrapper[33141]: I0308 03:47:54.762548 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-68b88f8cb5-vxn59_bfc9ae4f-eb67-4ed1-97a1-d67e839fd601/kube-state-metrics/0.log"
Mar 08 03:47:54.778717 master-0 kubenswrapper[33141]: I0308 03:47:54.778652 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-68b88f8cb5-vxn59_bfc9ae4f-eb67-4ed1-97a1-d67e839fd601/kube-rbac-proxy-main/0.log"
Mar 08 03:47:54.792395 master-0 kubenswrapper[33141]: I0308 03:47:54.792336 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-68b88f8cb5-vxn59_bfc9ae4f-eb67-4ed1-97a1-d67e839fd601/kube-rbac-proxy-self/0.log"
Mar 08 03:47:54.809449 master-0 kubenswrapper[33141]: I0308 03:47:54.809402 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_metrics-server-f8578dbbb-gzqxh_6701b05d-5128-437f-9c1c-6fbbf80d5db8/metrics-server/0.log"
Mar 08 03:47:54.828401 master-0 kubenswrapper[33141]: I0308 03:47:54.828342 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-5ccd479c8c-v4t2c_9c708dee-3f8e-4c03-82bd-d94fec91ac44/monitoring-plugin/0.log"
Mar 08 03:47:54.845895 master-0 kubenswrapper[33141]: I0308 03:47:54.845836 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-sjs7q_beed862c-6283-4568-aa2e-f49b31e30a3b/node-exporter/0.log"
Mar 08 03:47:54.855546 master-0 kubenswrapper[33141]: I0308 03:47:54.855499 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-sjs7q_beed862c-6283-4568-aa2e-f49b31e30a3b/kube-rbac-proxy/0.log"
Mar 08 03:47:54.871321 master-0 kubenswrapper[33141]: I0308 03:47:54.871269 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-sjs7q_beed862c-6283-4568-aa2e-f49b31e30a3b/init-textfile/0.log"
Mar 08 03:47:54.893801 master-0 kubenswrapper[33141]: I0308 03:47:54.893675 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-74cc79fd76-wwmnn_16ca7ace-9608-4686-a039-a6ba6e3ab837/kube-rbac-proxy-main/0.log"
Mar 08 03:47:54.906032 master-0 kubenswrapper[33141]: I0308 03:47:54.905980 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-74cc79fd76-wwmnn_16ca7ace-9608-4686-a039-a6ba6e3ab837/kube-rbac-proxy-self/0.log"
Mar 08 03:47:54.921961 master-0 kubenswrapper[33141]: I0308 03:47:54.921913 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-74cc79fd76-wwmnn_16ca7ace-9608-4686-a039-a6ba6e3ab837/openshift-state-metrics/0.log"
Mar 08 03:47:54.950219 master-0 kubenswrapper[33141]: I0308 03:47:54.950169 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd/prometheus/0.log"
Mar 08 03:47:54.965276 master-0 kubenswrapper[33141]: I0308 03:47:54.965232 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd/config-reloader/0.log"
Mar 08 03:47:54.976575 master-0 kubenswrapper[33141]: I0308 03:47:54.976527 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd/thanos-sidecar/0.log"
Mar 08 03:47:54.988185 master-0 kubenswrapper[33141]: I0308 03:47:54.988145 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd/kube-rbac-proxy-web/0.log"
Mar 08 03:47:55.001891 master-0 kubenswrapper[33141]: I0308 03:47:55.001844 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd/kube-rbac-proxy/0.log"
Mar 08 03:47:55.013359 master-0 kubenswrapper[33141]: I0308 03:47:55.013317 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd/kube-rbac-proxy-thanos/0.log"
Mar 08 03:47:55.028014 master-0 kubenswrapper[33141]: I0308 03:47:55.027962 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_100e5c55-6e5d-4393-8f05-f0a3bcf3a5cd/init-config-reloader/0.log"
Mar 08 03:47:55.049246 master-0 kubenswrapper[33141]: I0308 03:47:55.049197 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-5ff8674d55-lkwmx_ae8f3a1e-689b-4107-993a-dde67f4decf2/prometheus-operator/0.log"
Mar 08 03:47:55.058697 master-0 kubenswrapper[33141]: I0308 03:47:55.058654 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-5ff8674d55-lkwmx_ae8f3a1e-689b-4107-993a-dde67f4decf2/kube-rbac-proxy/0.log"
Mar 08 03:47:55.076090 master-0 kubenswrapper[33141]: I0308 03:47:55.076043 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-8464df8497-dfmh2_8985dac1-38cf-41d1-b7cd-c2bfaf0f6ebc/prometheus-operator-admission-webhook/0.log"
Mar 08 03:47:55.095642 master-0 kubenswrapper[33141]: I0308 03:47:55.095581 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-5cb97dd5fc-g7fqr_302e483a-6d6f-4a41-b4d7-3d11898277f4/telemeter-client/0.log"
Mar 08 03:47:55.107974 master-0 kubenswrapper[33141]: I0308 03:47:55.107928 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-5cb97dd5fc-g7fqr_302e483a-6d6f-4a41-b4d7-3d11898277f4/reload/0.log"
Mar 08 03:47:55.120164 master-0 kubenswrapper[33141]: I0308 03:47:55.120119 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-5cb97dd5fc-g7fqr_302e483a-6d6f-4a41-b4d7-3d11898277f4/kube-rbac-proxy/0.log"
Mar 08 03:47:55.138482 master-0 kubenswrapper[33141]: I0308 03:47:55.138433 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-cc54f9d45-86rbf_e26c5ed4-e811-4efd-a607-41e0953c1d8a/thanos-query/0.log"
Mar 08 03:47:55.148390 master-0 kubenswrapper[33141]: I0308 03:47:55.148293 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-cc54f9d45-86rbf_e26c5ed4-e811-4efd-a607-41e0953c1d8a/kube-rbac-proxy-web/0.log"
Mar 08 03:47:55.159895 master-0 kubenswrapper[33141]: I0308 03:47:55.159857 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-cc54f9d45-86rbf_e26c5ed4-e811-4efd-a607-41e0953c1d8a/kube-rbac-proxy/0.log"
Mar 08 03:47:55.171100 master-0 kubenswrapper[33141]: I0308 03:47:55.171059 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-cc54f9d45-86rbf_e26c5ed4-e811-4efd-a607-41e0953c1d8a/prom-label-proxy/0.log"
Mar 08 03:47:55.180819 master-0 kubenswrapper[33141]: I0308 03:47:55.180767 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-cc54f9d45-86rbf_e26c5ed4-e811-4efd-a607-41e0953c1d8a/kube-rbac-proxy-rules/0.log"
Mar 08 03:47:55.193766 master-0 kubenswrapper[33141]: I0308 03:47:55.193703 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-cc54f9d45-86rbf_e26c5ed4-e811-4efd-a607-41e0953c1d8a/kube-rbac-proxy-metrics/0.log"
Mar 08 03:47:55.917863 master-0 kubenswrapper[33141]: I0308 03:47:55.917768 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-jd7rl_2ffe00fd-6834-4a5b-8b0b-b467d284f23c/kube-rbac-proxy/0.log"
Mar 08 03:47:55.933664 master-0 kubenswrapper[33141]: I0308 03:47:55.933624 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-jd7rl_2ffe00fd-6834-4a5b-8b0b-b467d284f23c/cluster-autoscaler-operator/0.log"
Mar 08 03:47:55.946560 master-0 kubenswrapper[33141]: I0308 03:47:55.946500 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-jd7rl_2ffe00fd-6834-4a5b-8b0b-b467d284f23c/cluster-autoscaler-operator/1.log"
Mar 08 03:47:55.954625 master-0 kubenswrapper[33141]: I0308 03:47:55.954597 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-qgg4b_45212ce7-5f95-402e-93c4-83bac844f77d/cluster-baremetal-operator/1.log"
Mar 08 03:47:55.955732 master-0 kubenswrapper[33141]: I0308 03:47:55.955715 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-qgg4b_45212ce7-5f95-402e-93c4-83bac844f77d/cluster-baremetal-operator/2.log"
Mar 08 03:47:55.964972 master-0 kubenswrapper[33141]: I0308 03:47:55.964935 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-qgg4b_45212ce7-5f95-402e-93c4-83bac844f77d/baremetal-kube-rbac-proxy/0.log"
Mar 08 03:47:55.986786 master-0 kubenswrapper[33141]: I0308 03:47:55.986744 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-zljww_c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6/control-plane-machine-set-operator/0.log"
Mar 08 03:47:55.988493 master-0 kubenswrapper[33141]: I0308 03:47:55.988464 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-zljww_c9f5e9d1-0163-4a96-92d4-dc27f9b2b0d6/control-plane-machine-set-operator/1.log"
Mar 08 03:47:55.999392 master-0 kubenswrapper[33141]: I0308 03:47:55.999361 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-5l4t7_8c65557b-9566-49f1-a049-fe492ca201b5/kube-rbac-proxy/0.log"
Mar 08 03:47:56.010689 master-0 kubenswrapper[33141]: I0308 03:47:56.010638 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-5l4t7_8c65557b-9566-49f1-a049-fe492ca201b5/machine-api-operator/1.log"
Mar 08 03:47:56.011539 master-0 kubenswrapper[33141]: I0308 03:47:56.011501 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-5l4t7_8c65557b-9566-49f1-a049-fe492ca201b5/machine-api-operator/0.log"
Mar 08 03:47:56.621854 master-0 kubenswrapper[33141]: I0308 03:47:56.621748 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-v6lcp_78826ab3-1b89-4efe-9986-38e67fc8b8f1/controller/0.log"
Mar 08 03:47:56.632852 master-0 kubenswrapper[33141]: I0308 03:47:56.632799 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-v6lcp_78826ab3-1b89-4efe-9986-38e67fc8b8f1/kube-rbac-proxy/0.log"
Mar 08 03:47:56.658175 master-0 kubenswrapper[33141]: I0308 03:47:56.658123 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/controller/0.log"
Mar 08 03:47:56.755034 master-0 kubenswrapper[33141]: I0308 03:47:56.754466 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/frr/0.log"
Mar 08 03:47:56.820119 master-0 kubenswrapper[33141]: I0308 03:47:56.820080 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/reloader/0.log"
Mar 08 03:47:56.881559 master-0 kubenswrapper[33141]: I0308 03:47:56.881453 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/frr-metrics/0.log"
Mar 08 03:47:56.899645 master-0 kubenswrapper[33141]: I0308 03:47:56.899613 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/kube-rbac-proxy/0.log"
Mar 08 03:47:56.914050 master-0 kubenswrapper[33141]: I0308 03:47:56.914002 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/kube-rbac-proxy-frr/0.log"
Mar 08 03:47:56.931501 master-0 kubenswrapper[33141]: I0308 03:47:56.931469 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/cp-frr-files/0.log"
Mar 08 03:47:56.949678 master-0 kubenswrapper[33141]: I0308 03:47:56.949641 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/cp-reloader/0.log"
Mar 08 03:47:56.963103 master-0 kubenswrapper[33141]: I0308 03:47:56.963058 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-mhfnq_c1220927-804a-457f-81bf-e599bac8f203/cp-metrics/0.log"
Mar 08 03:47:56.981941 master-0 kubenswrapper[33141]: I0308 03:47:56.981870 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-hjjnv_30678329-c9f2-4958-9b2d-6bacd9250bbe/frr-k8s-webhook-server/0.log"
Mar 08 03:47:57.012204 master-0 kubenswrapper[33141]: I0308 03:47:57.012052 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-76b695cc4b-p6jt4_510c4395-781d-48ea-b253-247bc7bcc3f4/manager/0.log"
Mar 08 03:47:57.029168 master-0 kubenswrapper[33141]: I0308 03:47:57.029120 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-58cf648889-6c6hf_60613c6d-80bd-4b7c-9560-69b983dd71df/webhook-server/0.log"
Mar 08 03:47:57.127877 master-0 kubenswrapper[33141]: I0308 03:47:57.127832 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jhqp7_b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d/speaker/0.log"
Mar 08 03:47:57.138298 master-0 kubenswrapper[33141]: I0308 03:47:57.138200 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jhqp7_b1c3a32e-f5a0-43e2-8bad-7f1b5ec35f1d/kube-rbac-proxy/0.log"
Mar 08 03:47:58.344044 master-0 kubenswrapper[33141]: I0308 03:47:58.343992 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-c4zs4_103158c5-c99f-4224-bf5a-e23b1aaf9172/cluster-node-tuning-operator/2.log"
Mar 08 03:47:58.345516 master-0 kubenswrapper[33141]: I0308 03:47:58.345473 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-c4zs4_103158c5-c99f-4224-bf5a-e23b1aaf9172/cluster-node-tuning-operator/1.log"
Mar 08 03:47:58.361939 master-0 kubenswrapper[33141]: I0308 03:47:58.361874 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-qjpkx_5d29f16f-e26f-4b9d-a646-230316e936a8/tuned/0.log"
Mar 08 03:47:59.608332 master-0 kubenswrapper[33141]: I0308 03:47:59.608222 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-zcr8w_5a058138-8039-4841-821b-7ee5bb8648e4/kube-apiserver-operator/3.log"
Mar 08 03:47:59.619696 master-0 kubenswrapper[33141]: I0308 03:47:59.619652 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-zcr8w_5a058138-8039-4841-821b-7ee5bb8648e4/kube-apiserver-operator/4.log"
Mar 08 03:48:00.158073 master-0 kubenswrapper[33141]: I0308 03:48:00.158022 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_0a2e5993-e0cb-4c63-9dda-abbb60bfe42b/installer/0.log"
Mar 08 03:48:00.175343 master-0 kubenswrapper[33141]: I0308 03:48:00.175229 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_aea52bbe-5b64-45c7-8f8c-81d027f133d0/installer/0.log"
Mar 08 03:48:00.200608 master-0 kubenswrapper[33141]: I0308 03:48:00.200526 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-retry-1-master-0_e6716923-7f46-438f-9cc4-c0f071ca5b1a/installer/0.log"
Mar 08 03:48:00.221666 master-0 kubenswrapper[33141]: I0308 03:48:00.221588 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_0f958554-d0e0-4a2d-84e8-17e20ae7625c/installer/0.log"
Mar 08 03:48:00.256320 master-0 kubenswrapper[33141]: I0308 03:48:00.256264 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-6-master-0_fd1a6545-ecae-4ade-a3ba-8d7b0d469f0f/installer/0.log"
Mar 08 03:48:00.376139 master-0 kubenswrapper[33141]: I0308 03:48:00.376084 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_5dbd3d3755bd0f9e4667c2fcf3fcf07d/kube-apiserver/0.log"
Mar 08 03:48:00.387008 master-0 kubenswrapper[33141]: I0308 03:48:00.386958 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_5dbd3d3755bd0f9e4667c2fcf3fcf07d/kube-apiserver-cert-syncer/0.log"
Mar 08 03:48:00.405940 master-0 kubenswrapper[33141]: I0308 03:48:00.405841 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_5dbd3d3755bd0f9e4667c2fcf3fcf07d/kube-apiserver-cert-regeneration-controller/0.log"
Mar 08 03:48:00.416196 master-0 kubenswrapper[33141]: I0308 03:48:00.416152 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_5dbd3d3755bd0f9e4667c2fcf3fcf07d/kube-apiserver-insecure-readyz/0.log"
Mar 08 03:48:00.435595 master-0 kubenswrapper[33141]: I0308 03:48:00.435459 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_5dbd3d3755bd0f9e4667c2fcf3fcf07d/kube-apiserver-check-endpoints/0.log"
Mar 08 03:48:00.448601 master-0 kubenswrapper[33141]: I0308 03:48:00.448548 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_5dbd3d3755bd0f9e4667c2fcf3fcf07d/setup/0.log"
Mar 08 03:48:01.164582 master-0 kubenswrapper[33141]: I0308 03:48:01.164471 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-rjwdp_7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b/kube-rbac-proxy/0.log"
Mar 08 03:48:01.180340 master-0 kubenswrapper[33141]: I0308 03:48:01.180262 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-rjwdp_7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b/manager/2.log"
Mar 08 03:48:01.180522 master-0 kubenswrapper[33141]: I0308 03:48:01.180480 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-rjwdp_7074cf90-9aa5-41ab-a4c4-c3e1a1044c1b/manager/1.log"
Mar 08 03:48:01.631867 master-0 kubenswrapper[33141]: I0308 03:48:01.631805 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-7wr8x_420b9a36-158d-4468-924e-074e0e2c4f5c/cert-manager-controller/0.log"
Mar 08 03:48:01.646676 master-0 kubenswrapper[33141]: I0308 03:48:01.646635 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-zs2k7_ccd87fae-c211-42ca-96ff-2631339fcfd3/cert-manager-cainjector/0.log"
Mar 08 03:48:01.663044 master-0 kubenswrapper[33141]: I0308 03:48:01.663002 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-b8tpj_b76c541b-0854-4509-a480-63908cd11269/cert-manager-webhook/0.log"
Mar 08 03:48:01.667176 master-0 kubenswrapper[33141]: I0308 03:48:01.667127 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-7wr8x_420b9a36-158d-4468-924e-074e0e2c4f5c/cert-manager-controller/0.log"
Mar 08 03:48:01.687554 master-0 kubenswrapper[33141]: I0308 03:48:01.687501 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-zs2k7_ccd87fae-c211-42ca-96ff-2631339fcfd3/cert-manager-cainjector/0.log"
Mar 08 03:48:01.697259 master-0 kubenswrapper[33141]: I0308 03:48:01.697202 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-b8tpj_b76c541b-0854-4509-a480-63908cd11269/cert-manager-webhook/0.log"
Mar 08 03:48:02.199335 master-0 kubenswrapper[33141]: I0308 03:48:02.199287 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-c7l6p_c84683bd-71a1-47cf-a335-0954d7e82171/nmstate-console-plugin/0.log"
Mar 08 03:48:02.310779 master-0 kubenswrapper[33141]: I0308 03:48:02.310721 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-9tlm8_fe851503-1189-44d9-aaf7-2eb9b9b886a1/nmstate-handler/0.log"
Mar 08 03:48:02.331978 master-0 kubenswrapper[33141]: I0308 03:48:02.331935 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-b6x7j_56ce4272-f506-4729-a411-d59d530ed5ea/nmstate-metrics/0.log"
Mar 08 03:48:02.348393 master-0 kubenswrapper[33141]: I0308 03:48:02.348347 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-b6x7j_56ce4272-f506-4729-a411-d59d530ed5ea/kube-rbac-proxy/0.log"
Mar 08 03:48:02.370787 master-0 kubenswrapper[33141]: I0308 03:48:02.370737 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-4rskc_fd3b4005-3ca5-4d51-b08e-0a71545c2990/nmstate-operator/0.log"
Mar 08 03:48:02.385256 master-0 kubenswrapper[33141]: I0308 03:48:02.385193 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-c9mns_49c2416a-c985-49a6-b624-134998684fe6/nmstate-webhook/0.log"
Mar 08 03:48:03.035343 master-0 kubenswrapper[33141]: I0308 03:48:03.035267 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-ffz9l_97c86970-ecaa-4aef-86b3-9a514a1de075/prometheus-operator/0.log"
Mar 08 03:48:03.063368 master-0 kubenswrapper[33141]: I0308 03:48:03.063306 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-67c4d85dd4-82kg8_1cf5f791-400d-4e37-8a8c-5c28d9fbb166/prometheus-operator-admission-webhook/0.log"
Mar 08 03:48:03.083489 master-0 kubenswrapper[33141]: I0308 03:48:03.083456 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-67c4d85dd4-cn5c5_52519993-fb19-4251-96d1-3e9034236626/prometheus-operator-admission-webhook/0.log"
Mar 08 03:48:03.102245 master-0 kubenswrapper[33141]: I0308 03:48:03.102203 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-v9hk7_13879810-602c-43af-a881-54d18130c358/operator/0.log"
Mar 08 03:48:03.119761 master-0 kubenswrapper[33141]: I0308 03:48:03.119725 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-tqxdk_607a5d1b-0fde-4771-afe2-9705030fe181/perses-operator/0.log"
Mar 08 03:48:03.740575 master-0 kubenswrapper[33141]: I0308 03:48:03.740514 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-c8gc6_d5eee869-c27f-4534-bbce-d954c42b36a3/kube-multus-additional-cni-plugins/0.log"
Mar 08 03:48:03.756208 master-0 kubenswrapper[33141]: I0308 03:48:03.756157 33141 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-c8gc6_d5eee869-c27f-4534-bbce-d954c42b36a3/egress-router-binary-copy/0.log" Mar 08 03:48:03.774936 master-0 kubenswrapper[33141]: I0308 03:48:03.774810 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-c8gc6_d5eee869-c27f-4534-bbce-d954c42b36a3/cni-plugins/0.log" Mar 08 03:48:03.789027 master-0 kubenswrapper[33141]: I0308 03:48:03.788667 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-c8gc6_d5eee869-c27f-4534-bbce-d954c42b36a3/bond-cni-plugin/0.log" Mar 08 03:48:03.803128 master-0 kubenswrapper[33141]: I0308 03:48:03.803049 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-c8gc6_d5eee869-c27f-4534-bbce-d954c42b36a3/routeoverride-cni/0.log" Mar 08 03:48:03.827362 master-0 kubenswrapper[33141]: I0308 03:48:03.827298 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-c8gc6_d5eee869-c27f-4534-bbce-d954c42b36a3/whereabouts-cni-bincopy/0.log" Mar 08 03:48:03.842442 master-0 kubenswrapper[33141]: I0308 03:48:03.842353 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-c8gc6_d5eee869-c27f-4534-bbce-d954c42b36a3/whereabouts-cni/0.log" Mar 08 03:48:03.861993 master-0 kubenswrapper[33141]: I0308 03:48:03.861938 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-7769569c45-lxr7s_daf9e0ac-b5a3-4a3e-aa57-31b810f634ef/multus-admission-controller/0.log" Mar 08 03:48:03.875505 master-0 kubenswrapper[33141]: I0308 03:48:03.875457 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-7769569c45-lxr7s_daf9e0ac-b5a3-4a3e-aa57-31b810f634ef/kube-rbac-proxy/0.log" Mar 08 03:48:03.969507 master-0 
kubenswrapper[33141]: I0308 03:48:03.969456 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-jzw4f_a55bef81-2381-4036-b171-3dbc77e9c25d/kube-multus/0.log" Mar 08 03:48:04.000087 master-0 kubenswrapper[33141]: I0308 03:48:03.999948 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-2l64n_f6ee6202-11e5-4586-ae46-075da1ad7f1a/network-metrics-daemon/0.log" Mar 08 03:48:04.018094 master-0 kubenswrapper[33141]: I0308 03:48:04.018012 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-2l64n_f6ee6202-11e5-4586-ae46-075da1ad7f1a/kube-rbac-proxy/0.log" Mar 08 03:48:04.530548 master-0 kubenswrapper[33141]: I0308 03:48:04.530503 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_lvms-operator-bfb8dcf9c-rfcbz_45e80f38-1789-4edc-8090-6bd26e1441bd/manager/0.log" Mar 08 03:48:04.545414 master-0 kubenswrapper[33141]: I0308 03:48:04.545359 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-6wwxp_96fe6f11-1fc7-4887-920b-80ed59b73d66/vg-manager/1.log" Mar 08 03:48:04.549439 master-0 kubenswrapper[33141]: I0308 03:48:04.549395 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-6wwxp_96fe6f11-1fc7-4887-920b-80ed59b73d66/vg-manager/0.log" Mar 08 03:48:05.099417 master-0 kubenswrapper[33141]: I0308 03:48:05.099348 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_0a8d4b89-fd81-4418-9f72-c8447fad86ad/installer/0.log" Mar 08 03:48:05.118632 master-0 kubenswrapper[33141]: I0308 03:48:05.118572 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_6a7152f2-d51f-4e15-8e0a-92278cbecd53/installer/0.log" Mar 08 03:48:05.134454 master-0 kubenswrapper[33141]: I0308 03:48:05.134395 33141 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-retry-1-master-0_627f0501-8b6a-4bc7-b610-355a0661f385/installer/0.log" Mar 08 03:48:05.149592 master-0 kubenswrapper[33141]: I0308 03:48:05.149523 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_2129802f-8b19-4eee-8ac3-1cb980b067b7/installer/0.log" Mar 08 03:48:05.167510 master-0 kubenswrapper[33141]: I0308 03:48:05.167233 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-4-master-0_4789137f-dcfe-4afa-9f1e-91546be2c979/installer/0.log" Mar 08 03:48:05.412627 master-0 kubenswrapper[33141]: I0308 03:48:05.412495 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_021a99d52e4f3f6d8ed4d016669c0eb8/kube-controller-manager/0.log" Mar 08 03:48:05.477287 master-0 kubenswrapper[33141]: I0308 03:48:05.477215 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_021a99d52e4f3f6d8ed4d016669c0eb8/cluster-policy-controller/0.log" Mar 08 03:48:05.487327 master-0 kubenswrapper[33141]: I0308 03:48:05.487290 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_021a99d52e4f3f6d8ed4d016669c0eb8/kube-controller-manager-cert-syncer/0.log" Mar 08 03:48:05.501394 master-0 kubenswrapper[33141]: I0308 03:48:05.501344 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_021a99d52e4f3f6d8ed4d016669c0eb8/kube-controller-manager-recovery-controller/0.log" Mar 08 03:48:06.119758 master-0 kubenswrapper[33141]: I0308 03:48:06.119686 33141 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-xtwpr_2468d2a3-ec65-4888-a86a-3f66fa311f56/kube-controller-manager-operator/3.log" Mar 08 03:48:06.152190 master-0 kubenswrapper[33141]: I0308 03:48:06.152129 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-xtwpr_2468d2a3-ec65-4888-a86a-3f66fa311f56/kube-controller-manager-operator/4.log" Mar 08 03:48:07.548054 master-0 kubenswrapper[33141]: I0308 03:48:07.548005 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-c7l6p_c84683bd-71a1-47cf-a335-0954d7e82171/nmstate-console-plugin/0.log" Mar 08 03:48:07.564419 master-0 kubenswrapper[33141]: I0308 03:48:07.564373 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-9tlm8_fe851503-1189-44d9-aaf7-2eb9b9b886a1/nmstate-handler/0.log" Mar 08 03:48:07.583812 master-0 kubenswrapper[33141]: I0308 03:48:07.583759 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-b6x7j_56ce4272-f506-4729-a411-d59d530ed5ea/nmstate-metrics/0.log" Mar 08 03:48:07.587309 master-0 kubenswrapper[33141]: I0308 03:48:07.587159 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_ddf7d93b-6a73-4de5-b984-cde6fba07b48/installer/0.log" Mar 08 03:48:07.597632 master-0 kubenswrapper[33141]: I0308 03:48:07.597598 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-b6x7j_56ce4272-f506-4729-a411-d59d530ed5ea/kube-rbac-proxy/0.log" Mar 08 03:48:07.613242 master-0 kubenswrapper[33141]: I0308 03:48:07.613184 33141 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-4rskc_fd3b4005-3ca5-4d51-b08e-0a71545c2990/nmstate-operator/0.log" Mar 08 03:48:07.626403 master-0 kubenswrapper[33141]: I0308 03:48:07.626348 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-c9mns_49c2416a-c985-49a6-b624-134998684fe6/nmstate-webhook/0.log" Mar 08 03:48:07.631448 master-0 kubenswrapper[33141]: I0308 03:48:07.628369 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_89044116-4d25-4312-9475-c92acd031a98/installer/0.log" Mar 08 03:48:07.668982 master-0 kubenswrapper[33141]: I0308 03:48:07.668933 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1453f6461bf5d599ad65a4656343ee91/kube-scheduler/0.log" Mar 08 03:48:07.687031 master-0 kubenswrapper[33141]: I0308 03:48:07.686971 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1453f6461bf5d599ad65a4656343ee91/kube-scheduler-cert-syncer/0.log" Mar 08 03:48:07.703206 master-0 kubenswrapper[33141]: I0308 03:48:07.703167 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1453f6461bf5d599ad65a4656343ee91/kube-scheduler-recovery-controller/0.log" Mar 08 03:48:07.716270 master-0 kubenswrapper[33141]: I0308 03:48:07.716198 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1453f6461bf5d599ad65a4656343ee91/wait-for-host-port/0.log" Mar 08 03:48:08.324987 master-0 kubenswrapper[33141]: I0308 03:48:08.324934 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-rz5c8_89e15db4-c541-4d53-878d-706fa022f970/kube-scheduler-operator-container/3.log" Mar 08 
03:48:08.326294 master-0 kubenswrapper[33141]: I0308 03:48:08.326257 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-rz5c8_89e15db4-c541-4d53-878d-706fa022f970/kube-scheduler-operator-container/4.log" Mar 08 03:48:08.852571 master-0 kubenswrapper[33141]: I0308 03:48:08.852503 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator_migrator-57ccdf9b5-rrfg6_3c336192-80ee-4d53-a4ec-710cba95fac6/migrator/0.log" Mar 08 03:48:08.865567 master-0 kubenswrapper[33141]: I0308 03:48:08.865467 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator_migrator-57ccdf9b5-rrfg6_3c336192-80ee-4d53-a4ec-710cba95fac6/graceful-termination/0.log" Mar 08 03:48:09.306618 master-0 kubenswrapper[33141]: I0308 03:48:09.306550 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-7f65c457f5-7k8j7_1d446527-f3fd-4a37-a980-7445031928d1/kube-storage-version-migrator-operator/4.log" Mar 08 03:48:09.306848 master-0 kubenswrapper[33141]: I0308 03:48:09.306783 33141 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-7f65c457f5-7k8j7_1d446527-f3fd-4a37-a980-7445031928d1/kube-storage-version-migrator-operator/3.log"